LlamaFinetuneGGUF committed on
Commit
db769e0
·
unverified ·
2 Parent(s): ba4e788 d4400a5

Merge branch 'main' into feat/improved-providers-list

FAQ.md CHANGED
@@ -2,6 +2,18 @@
2
 
3
  # bolt.diy
4
5
  ## FAQ
6
 
7
  ### How do I get the best results with bolt.diy?
@@ -34,14 +46,18 @@ We have seen this error a couple times and for some reason just restarting the D
34
 
35
 We promise you that we are constantly testing new PRs coming into bolt.diy and the preview is core functionality, so the application is not broken! When you get a blank preview or don't get a preview, this is generally because the LLM hallucinated bad code or incorrect commands. We are working on making this more transparent so it is obvious. Sometimes the error will appear in the developer console too, so check that as well.
36
 
37
- ### How to add a LLM:
38
 
39
- To make new LLMs available to use in this version of bolt.new, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
40
 
41
- By default, Anthropic, OpenAI, Groq, and Ollama are implemented as providers, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!
42
 
43
- When you add a new model to the MODEL_LIST array, it will immediately be available to use when you run the app locally or reload it. For Ollama models, make sure you have the model installed already before trying to use it here!
44
 
45
- ### Everything works but the results are bad
46
 
47
- This goes to the point above about how local LLMs are getting very powerful but you still are going to see better (sometimes much better) results with the largest LLMs like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b. If you are using smaller LLMs like Qwen-2.5-Coder, consider it more experimental and educational at this point. It can build smaller applications really well, which is super impressive for a local LLM, but for larger scale applications you want to use the larger LLMs still!
 
 
 
 
 
2
 
3
  # bolt.diy
4
 
5
+ ## Recommended Models for bolt.diy
6
+
7
+ For the best experience with bolt.diy, we recommend using the following models:
8
+
9
+ - **Claude 3.5 Sonnet (old)**: Best overall coder, providing excellent results across all use cases
10
+ - **Gemini 2.0 Flash**: Exceptional speed while maintaining good performance
11
+ - **GPT-4o**: Strong alternative to Claude 3.5 Sonnet with comparable capabilities
12
+ - **DeepSeekCoder V2 236b**: Best open source model (available through OpenRouter, DeepSeek API, or self-hosted)
13
+ - **Qwen 2.5 Coder 32b**: Best model for self-hosting with reasonable hardware requirements
14
+
15
+ **Note**: Models with fewer than 7B parameters typically lack the capability to properly interact with bolt!
16
+
17
  ## FAQ
18
 
19
  ### How do I get the best results with bolt.diy?
 
46
 
47
 We promise you that we are constantly testing new PRs coming into bolt.diy and the preview is core functionality, so the application is not broken! When you get a blank preview or don't get a preview, this is generally because the LLM hallucinated bad code or incorrect commands. We are working on making this more transparent so it is obvious. Sometimes the error will appear in the developer console too, so check that as well.
48
 
49
+ ### Everything works but the results are bad
50
 
51
+ This goes to the point above about how local LLMs are getting very powerful, but you will still see better (sometimes much better) results with the largest LLMs like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b. If you are using smaller LLMs like Qwen-2.5-Coder, consider it more experimental and educational at this point. It can build smaller applications really well, which is super impressive for a local LLM, but for larger-scale applications you still want to use the larger LLMs!
52
 
53
+ ### Received structured exception #0xc0000005: access violation
54
 
55
+ If you are getting this, you are probably on Windows. The fix is generally to update the [Visual C++ Redistributable](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
56
 
57
+ ### How to add an LLM:
58
 
59
+ To make new LLMs available to use in this version of bolt.diy, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
60
+
61
+ By default, many providers are already implemented, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!
62
+
63
+ When you add a new model to the MODEL_LIST array, it will immediately be available to use when you run the app locally or reload it.
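
For illustration, here is a minimal sketch of one such MODEL_LIST element (the field values are copied from the `grok-2-1212` entry added elsewhere in this commit; the `exampleEntry` constant name is only for the sketch, and `ModelInfo` is imported the same way `app/types/model.ts` does):

```ts
import type { ModelInfo } from '~/utils/types';

// One element of MODEL_LIST in app/utils/constants.ts:
// - name:     the model ID, taken from the provider's API documentation
// - label:    the text shown in the frontend model dropdown
// - provider: must match a provider name registered in PROVIDER_LIST
const exampleEntry: ModelInfo = {
  name: 'grok-2-1212',
  label: 'xAI Grok2 1212',
  provider: 'xAI',
  maxTokenAllowed: 8000, // token budget used by the existing static entries
};
```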
README.md CHANGED
@@ -1,19 +1,32 @@
1
- [![bolt.diy: AI-Powered Full-Stack Web Development in the Browser](./public/social_preview_index.jpg)](https://bolt.diy)
2
-
3
  # bolt.diy (Previously oTToDev)
 
4
 
5
  Welcome to bolt.diy, the official open source version of Bolt.new (previously known as oTToDev and bolt.new ANY LLM), which allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
6
 
7
- Check the [bolt.diy Docs](https://stackblitz-labs.github.io/bolt.diy/) for more information. This documentation is still being updated after the transfer.
 
 
8
 
9
  bolt.diy was originally started by [Cole Medin](https://www.youtube.com/@ColeMedin) but has quickly grown into a massive community effort to build the BEST open source AI coding assistant!
10
 
11
- ## Join the community for bolt.diy!
12
 
13
- https://thinktank.ottomator.ai
14
 
15
 
16
- ## Requested Additions - Feel Free to Contribute!
17
 
18
 - ✅ OpenRouter Integration (@coleam00)
19
 - ✅ Gemini Integration (@jonathands)
@@ -60,7 +73,7 @@ https://thinktank.ottomator.ai
60
  - ⬜ Perplexity Integration
61
  - ⬜ Vertex AI Integration
62
 
63
- ## bolt.diy Features
64
 
65
  - **AI-powered full-stack web development** directly in your browser.
66
  - **Support for multiple LLMs** with an extensible architecture to integrate additional models.
@@ -70,7 +83,7 @@ https://thinktank.ottomator.ai
70
  - **Download projects as ZIP** for easy portability.
71
  - **Integration-ready Docker support** for a hassle-free setup.
72
 
73
- ## Setup bolt.diy
74
 
75
 If you're new to installing software from GitHub, don't worry! If you encounter any issues, feel free to submit an "issue" using the provided links or improve this documentation by forking the repository, editing the instructions, and submitting a pull request. The following instructions will help you get the stable branch up and running on your local machine in no time.
76
 
@@ -95,34 +108,6 @@ Clone the repository using Git:
95
  git clone -b stable https://github.com/stackblitz-labs/bolt.diy
96
  ```
97
 
98
- ### (Optional) Configure Environment Variables
99
-
100
- Most environment variables can be configured directly through the settings menu of the application. However, if you need to manually configure them:
101
-
102
- 1. Rename `.env.example` to `.env.local`.
103
- 2. Add your LLM API keys. For example:
104
-
105
- ```env
106
- GROQ_API_KEY=YOUR_GROQ_API_KEY
107
- OPENAI_API_KEY=YOUR_OPENAI_API_KEY
108
- ANTHROPIC_API_KEY=YOUR_ANTHROPIC_API_KEY
109
- ```
110
-
111
- **Note**: Ollama does not require an API key as it runs locally.
112
-
113
- 3. Optionally, set additional configurations:
114
-
115
- ```env
116
- # Debugging
117
- VITE_LOG_LEVEL=debug
118
-
119
- # Ollama settings (example: 8K context, localhost port 11434)
120
- OLLAMA_API_BASE_URL=http://localhost:11434
121
- DEFAULT_NUM_CTX=8192
122
- ```
123
-
124
- **Important**: Do not commit your `.env.local` file to version control. This file is already included in `.gitignore`.
125
-
126
  ---
127
 
128
  ## Run the Application
@@ -155,27 +140,30 @@ DEFAULT_NUM_CTX=8192
155
 
156
  Use the provided NPM scripts:
157
  ```bash
158
- npm run dockerbuild # Development build
159
- npm run dockerbuild:prod # Production build
160
  ```
161
 
162
  Alternatively, use Docker commands directly:
163
  ```bash
164
- docker build . --target bolt-ai-development # Development build
165
- docker build . --target bolt-ai-production # Production build
166
  ```
167
 
168
  2. **Run the Container**:
169
  Use Docker Compose profiles to manage environments:
170
  ```bash
171
- docker-compose --profile development up # Development
172
- docker-compose --profile production up # Production
173
  ```
174
 
175
  - With the development profile, changes to your code will automatically reflect in the running container (hot reloading).
176
 
177
  ---
178
 
 
 
 
 
 
 
179
  ### Update Your Local Version to the Latest
180
 
181
  To keep your local version of bolt.diy up to date with the latest changes, follow these steps for your operating system:
@@ -236,4 +224,4 @@ Explore upcoming features and priorities on our [Roadmap](https://roadmap.sh/r/o
236
 
237
  ## FAQ
238
 
239
- For answers to common questions, visit our [FAQ Page](FAQ.md).
 
 
 
1
  # bolt.diy (Previously oTToDev)
2
+ [![bolt.diy: AI-Powered Full-Stack Web Development in the Browser](./public/social_preview_index.jpg)](https://bolt.diy)
3
 
4
  Welcome to bolt.diy, the official open source version of Bolt.new (previously known as oTToDev and bolt.new ANY LLM), which allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
5
 
6
+ Check the [bolt.diy Docs](https://stackblitz-labs.github.io/bolt.diy/) for more information.
7
+
8
+ We have also launched an experimental agent called the "bolt.diy Expert" that can answer common questions about bolt.diy. Find it here on the [oTTomator Live Agent Studio](https://studio.ottomator.ai/).
9
 
10
  bolt.diy was originally started by [Cole Medin](https://www.youtube.com/@ColeMedin) but has quickly grown into a massive community effort to build the BEST open source AI coding assistant!
11
 
12
+ ## Table of Contents
13
+
14
+ - [Join the Community](#join-the-community)
15
+ - [Requested Additions](#requested-additions)
16
+ - [Features](#features)
17
+ - [Setup](#setup)
18
+ - [Run the Application](#run-the-application)
19
+ - [Available Scripts](#available-scripts)
20
+ - [Contributing](#contributing)
21
+ - [Roadmap](#roadmap)
22
+ - [FAQ](#faq)
23
+
24
+ ## Join the community
25
 
26
+ [Join the bolt.diy community here, in the thinktank on ottomator.ai!](https://thinktank.ottomator.ai)
27
 
28
 
29
+ ## Requested Additions
30
 
31
 - ✅ OpenRouter Integration (@coleam00)
32
 - ✅ Gemini Integration (@jonathands)
 
73
  - ⬜ Perplexity Integration
74
  - ⬜ Vertex AI Integration
75
 
76
+ ## Features
77
 
78
  - **AI-powered full-stack web development** directly in your browser.
79
  - **Support for multiple LLMs** with an extensible architecture to integrate additional models.
 
83
  - **Download projects as ZIP** for easy portability.
84
  - **Integration-ready Docker support** for a hassle-free setup.
85
 
86
+ ## Setup
87
 
88
 If you're new to installing software from GitHub, don't worry! If you encounter any issues, feel free to submit an "issue" using the provided links or improve this documentation by forking the repository, editing the instructions, and submitting a pull request. The following instructions will help you get the stable branch up and running on your local machine in no time.
89
 
 
108
  git clone -b stable https://github.com/stackblitz-labs/bolt.diy
109
  ```
110
 
 
111
  ---
112
 
113
  ## Run the Application
 
140
 
141
  Use the provided NPM scripts:
142
  ```bash
143
+ npm run dockerbuild
 
144
  ```
145
 
146
  Alternatively, use Docker commands directly:
147
  ```bash
148
+ docker build . --target bolt-ai-development
 
149
  ```
150
 
151
  2. **Run the Container**:
152
  Use Docker Compose profiles to manage environments:
153
  ```bash
154
+ docker-compose --profile development up
 
155
  ```
156
 
157
  - With the development profile, changes to your code will automatically reflect in the running container (hot reloading).
158
 
159
  ---
160
 
161
+ ### Entering API Keys
162
+
163
+ All of your API keys can be configured directly in the application. Just select the provider you want from the dropdown and click the pencil icon to enter your API key.
164
+
165
+ ---
166
+
167
  ### Update Your Local Version to the Latest
168
 
169
  To keep your local version of bolt.diy up to date with the latest changes, follow these steps for your operating system:
 
224
 
225
  ## FAQ
226
 
227
+ For answers to common questions and issues, and to see a list of recommended models, visit our [FAQ Page](FAQ.md).
app/commit.json CHANGED
@@ -1 +1 @@
1
- { "commit": "eb6d4353565be31c6e20bfca2c5aea29e4f45b6d", "version": "0.0.3" }
 
1
+ { "commit": "a53b10ff399c591e898182e4b3934c26db19b6d6", "version": "0.0.3" }
app/components/chat/BaseChat.tsx CHANGED
@@ -119,6 +119,9 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
119
 
120
  useEffect(() => {
121
  // Load API keys from cookies on component mount
 
 
 
122
  try {
123
  const storedApiKeys = Cookies.get('apiKeys');
124
 
@@ -127,6 +130,7 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
127
 
128
  if (typeof parsedKeys === 'object' && parsedKeys !== null) {
129
  setApiKeys(parsedKeys);
 
130
  }
131
  }
132
  } catch (error) {
@@ -155,7 +159,7 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
155
  Cookies.remove('providers');
156
  }
157
 
158
- initializeModelList(providerSettings).then((modelList) => {
159
  setModelList(modelList);
160
  });
161
 
 
119
 
120
  useEffect(() => {
121
  // Load API keys from cookies on component mount
122
+
123
+ let parsedApiKeys: Record<string, string> | undefined = {};
124
+
125
  try {
126
  const storedApiKeys = Cookies.get('apiKeys');
127
 
 
130
 
131
  if (typeof parsedKeys === 'object' && parsedKeys !== null) {
132
  setApiKeys(parsedKeys);
133
+ parsedApiKeys = parsedKeys;
134
  }
135
  }
136
  } catch (error) {
 
159
  Cookies.remove('providers');
160
  }
161
 
162
+ initializeModelList({ apiKeys: parsedApiKeys, providerSettings }).then((modelList) => {
163
  setModelList(modelList);
164
  });
165
 
app/components/settings/SettingsWindow.tsx CHANGED
@@ -63,7 +63,7 @@ export const SettingsWindow = ({ open, onClose }: SettingsProps) => {
63
  variants={dialogBackdropVariants}
64
  />
65
  </RadixDialog.Overlay>
66
- <RadixDialog.Content asChild>
67
  <motion.div
68
  className="fixed top-[50%] left-[50%] z-max h-[85vh] w-[90vw] max-w-[900px] translate-x-[-50%] translate-y-[-50%] border border-bolt-elements-borderColor rounded-lg shadow-lg focus:outline-none overflow-hidden"
69
  initial="closed"
 
63
  variants={dialogBackdropVariants}
64
  />
65
  </RadixDialog.Overlay>
66
+ <RadixDialog.Content aria-describedby={undefined} asChild>
67
  <motion.div
68
  className="fixed top-[50%] left-[50%] z-max h-[85vh] w-[90vw] max-w-[900px] translate-x-[-50%] translate-y-[-50%] border border-bolt-elements-borderColor rounded-lg shadow-lg focus:outline-none overflow-hidden"
69
  initial="closed"
app/components/settings/debug/DebugTab.tsx CHANGED
@@ -2,6 +2,7 @@ import React, { useCallback, useEffect, useState } from 'react';
2
  import { useSettings } from '~/lib/hooks/useSettings';
3
  import commit from '~/commit.json';
4
  import { toast } from 'react-toastify';
 
5
 
6
  interface ProviderStatus {
7
  name: string;
@@ -236,7 +237,7 @@ const checkProviderStatus = async (url: string | null, providerName: string): Pr
236
  }
237
 
238
  // Try different endpoints based on provider
239
- const checkUrls = [`${url}/api/health`, `${url}/v1/models`];
240
  console.log(`[Debug] Checking additional endpoints:`, checkUrls);
241
 
242
  const results = await Promise.all(
@@ -321,14 +322,16 @@ export default function DebugTab() {
321
  .filter(([, provider]) => LOCAL_PROVIDERS.includes(provider.name))
322
  .map(async ([, provider]) => {
323
  const envVarName =
324
- provider.name.toLowerCase() === 'ollama'
325
- ? 'OLLAMA_API_BASE_URL'
326
- : provider.name.toLowerCase() === 'lmstudio'
327
- ? 'LMSTUDIO_API_BASE_URL'
328
- : `REACT_APP_${provider.name.toUpperCase()}_URL`;
329
 
330
  // Access environment variables through import.meta.env
331
- const url = import.meta.env[envVarName] || provider.settings.baseUrl || null; // Ensure baseUrl is used
 
 
 
 
 
 
332
  console.log(`[Debug] Using URL for ${provider.name}:`, url, `(from ${envVarName})`);
333
 
334
  const status = await checkProviderStatus(url, provider.name);
 
2
  import { useSettings } from '~/lib/hooks/useSettings';
3
  import commit from '~/commit.json';
4
  import { toast } from 'react-toastify';
5
+ import { providerBaseUrlEnvKeys } from '~/utils/constants';
6
 
7
  interface ProviderStatus {
8
  name: string;
 
237
  }
238
 
239
  // Try different endpoints based on provider
240
+ const checkUrls = [`${url}/api/health`, url.endsWith('v1') ? `${url}/models` : `${url}/v1/models`];
241
  console.log(`[Debug] Checking additional endpoints:`, checkUrls);
242
 
243
  const results = await Promise.all(
 
322
  .filter(([, provider]) => LOCAL_PROVIDERS.includes(provider.name))
323
  .map(async ([, provider]) => {
324
  const envVarName =
325
+ providerBaseUrlEnvKeys[provider.name].baseUrlKey || `REACT_APP_${provider.name.toUpperCase()}_URL`;
 
 
 
 
326
 
327
  // Access environment variables through import.meta.env
328
+ let settingsUrl = provider.settings.baseUrl;
329
+
330
+ if (settingsUrl && settingsUrl.trim().length === 0) {
331
+ settingsUrl = undefined;
332
+ }
333
+
334
+ const url = settingsUrl || import.meta.env[envVarName] || null; // Ensure baseUrl is used
335
  console.log(`[Debug] Using URL for ${provider.name}:`, url, `(from ${envVarName})`);
336
 
337
  const status = await checkProviderStatus(url, provider.name);
app/components/settings/providers/ProvidersTab.tsx CHANGED
@@ -7,9 +7,9 @@ import { logStore } from '~/lib/stores/logs';
7
 
8
  // Import a default fallback icon
9
  import DefaultIcon from '/icons/Default.svg';
10
-
11
- // List of advanced providers with correct casing
12
  const ADVANCED_PROVIDERS = ['Ollama', 'OpenAILike', 'LMStudio'];
 
 
13
 
14
  export default function ProvidersTab() {
15
  const { providers, updateProviderSettings, isLocalModel } = useSettings();
@@ -118,28 +118,77 @@ export default function ProvidersTab() {
118
  />
119
  </div>
120
 
121
- {/* Regular Providers Grid */}
122
- <div className="grid grid-cols-2 gap-4 mb-8">
123
- {regularProviders.map((provider) => (
124
- <ProviderCard key={provider.name} provider={provider} />
125
- ))}
126
- </div>
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
127
 
128
- {/* Advanced Providers Section */}
129
- {advancedProviders.length > 0 && (
130
- <div className="mb-4 border-t border-bolt-elements-borderColor pt-4">
131
- <h3 className="text-bolt-elements-textSecondary text-lg font-medium mb-2">Experimental Providers</h3>
132
- <p className="text-bolt-elements-textSecondary mb-6">
133
- These providers are experimental features that allow you to run AI models locally or connect to your own infrastructure.
134
- They require additional setup but offer more flexibility for advanced users.
135
- </p>
136
- <div className="grid grid-cols-2 gap-4">
137
- {advancedProviders.map((provider) => (
138
- <ProviderCard key={provider.name} provider={provider} />
139
- ))}
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
140
  </div>
141
- </div>
142
- )}
143
  </div>
144
  );
145
  }
 
7
 
8
  // Import a default fallback icon
9
  import DefaultIcon from '/icons/Default.svg';
 
 
10
  const ADVANCED_PROVIDERS = ['Ollama', 'OpenAILike', 'LMStudio'];
11
+ import { providerBaseUrlEnvKeys } from '~/utils/constants';
12
+
13
 
14
  export default function ProvidersTab() {
15
  const { providers, updateProviderSettings, isLocalModel } = useSettings();
 
118
  />
119
  </div>
120
 
121
+ {filteredProviders.map((provider) => {
122
+ const envBaseUrlKey = providerBaseUrlEnvKeys[provider.name].baseUrlKey;
123
+ const envBaseUrl = envBaseUrlKey ? import.meta.env[envBaseUrlKey] : undefined;
124
+
125
+ return (
126
+ <div
127
+ key={provider.name}
128
+ className="flex flex-col mb-2 provider-item hover:bg-bolt-elements-bg-depth-3 p-4 rounded-lg border border-bolt-elements-borderColor "
129
+ >
130
+ <div className="flex items-center justify-between mb-2">
131
+ <div className="flex items-center gap-2">
132
+ <img
133
+ src={`/icons/${provider.name}.svg`} // Attempt to load the specific icon
134
+ onError={(e) => {
135
+ // Fallback to default icon on error
136
+ e.currentTarget.src = DefaultIcon;
137
+ }}
138
+ alt={`${provider.name} icon`}
139
+ className="w-6 h-6 dark:invert"
140
+ />
141
+ <span className="text-bolt-elements-textPrimary">{provider.name}</span>
142
+ </div>
143
+ <Switch
144
+ className="ml-auto"
145
+ checked={provider.settings.enabled}
146
+ onCheckedChange={(enabled) => {
147
+ updateProviderSettings(provider.name, { ...provider.settings, enabled });
148
 
149
+ if (enabled) {
150
+ logStore.logProvider(`Provider ${provider.name} enabled`, { provider: provider.name });
151
+ } else {
152
+ logStore.logProvider(`Provider ${provider.name} disabled`, { provider: provider.name });
153
+ }
154
+ }}
155
+ />
156
+ </div>
157
+ {/* Base URL input for configurable providers */}
158
+ {URL_CONFIGURABLE_PROVIDERS.includes(provider.name) && provider.settings.enabled && (
159
+ <div className="mt-2">
160
+ {envBaseUrl && (
161
+ <label className="block text-xs text-bolt-elements-textSecondary text-green-300 mb-2">
162
+ Set On (.env) : {envBaseUrl}
163
+ </label>
164
+ )}
165
+ <label className="block text-sm text-bolt-elements-textSecondary mb-2">
166
+ {envBaseUrl ? 'Override Base Url' : 'Base URL '}:{' '}
167
+ </label>
168
+ <input
169
+ type="text"
170
+ value={provider.settings.baseUrl || ''}
171
+ onChange={(e) => {
172
+ let newBaseUrl: string | undefined = e.target.value;
173
+
174
+ if (newBaseUrl && newBaseUrl.trim().length === 0) {
175
+ newBaseUrl = undefined;
176
+ }
177
+
178
+ updateProviderSettings(provider.name, { ...provider.settings, baseUrl: newBaseUrl });
179
+ logStore.logProvider(`Base URL updated for ${provider.name}`, {
180
+ provider: provider.name,
181
+ baseUrl: newBaseUrl,
182
+ });
183
+ }}
184
+ placeholder={`Enter ${provider.name} base URL`}
185
+ className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor"
186
+ />
187
+ </div>
188
+ )}
189
  </div>
190
+ );
191
+ })}
192
  </div>
193
  );
194
  }
app/entry.server.tsx CHANGED
@@ -14,7 +14,7 @@ export default async function handleRequest(
14
  remixContext: EntryContext,
15
  _loadContext: AppLoadContext,
16
  ) {
17
- await initializeModelList();
18
 
19
  const readable = await renderToReadableStream(<RemixServer context={remixContext} url={request.url} />, {
20
  signal: request.signal,
 
14
  remixContext: EntryContext,
15
  _loadContext: AppLoadContext,
16
  ) {
17
+ await initializeModelList({});
18
 
19
  const readable = await renderToReadableStream(<RemixServer context={remixContext} url={request.url} />, {
20
  signal: request.signal,
app/lib/.server/llm/api-key.ts CHANGED
@@ -1,8 +1,6 @@
1
- /*
2
- * @ts-nocheck
3
- * Preventing TS checks with files presented in the video for a better presentation.
4
- */
5
  import { env } from 'node:process';
 
 
6
 
7
  export function getAPIKey(cloudflareEnv: Env, provider: string, userApiKeys?: Record<string, string>) {
8
  /**
@@ -15,7 +13,20 @@ export function getAPIKey(cloudflareEnv: Env, provider: string, userApiKeys?: Re
15
  return userApiKeys[provider];
16
  }
17
 
18
- // Fall back to environment variables
 
 
 
 
 
 
 
 
 
 
 
 
 
19
  switch (provider) {
20
  case 'Anthropic':
21
  return env.ANTHROPIC_API_KEY || cloudflareEnv.ANTHROPIC_API_KEY;
@@ -50,16 +61,43 @@ export function getAPIKey(cloudflareEnv: Env, provider: string, userApiKeys?: Re
50
  }
51
  }
52
 
53
- export function getBaseURL(cloudflareEnv: Env, provider: string) {
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
54
  switch (provider) {
55
  case 'Together':
56
- return env.TOGETHER_API_BASE_URL || cloudflareEnv.TOGETHER_API_BASE_URL || 'https://api.together.xyz/v1';
 
 
 
 
 
57
  case 'OpenAILike':
58
- return env.OPENAI_LIKE_API_BASE_URL || cloudflareEnv.OPENAI_LIKE_API_BASE_URL;
59
  case 'LMStudio':
60
- return env.LMSTUDIO_API_BASE_URL || cloudflareEnv.LMSTUDIO_API_BASE_URL || 'http://localhost:1234';
 
 
61
  case 'Ollama': {
62
- let baseUrl = env.OLLAMA_API_BASE_URL || cloudflareEnv.OLLAMA_API_BASE_URL || 'http://localhost:11434';
 
63
 
64
  if (env.RUNNING_IN_DOCKER === 'true') {
65
  baseUrl = baseUrl.replace('localhost', 'host.docker.internal');
 
 
 
 
 
1
  import { env } from 'node:process';
2
+ import type { IProviderSetting } from '~/types/model';
3
+ import { getProviderBaseUrlAndKey } from '~/utils/constants';
4
 
5
  export function getAPIKey(cloudflareEnv: Env, provider: string, userApiKeys?: Record<string, string>) {
6
  /**
 
13
  return userApiKeys[provider];
14
  }
15
 
16
+ const { apiKey } = getProviderBaseUrlAndKey({
17
+ provider,
18
+ apiKeys: userApiKeys,
19
+ providerSettings: undefined,
20
+ serverEnv: cloudflareEnv as any,
21
+ defaultBaseUrlKey: '',
22
+ defaultApiTokenKey: '',
23
+ });
24
+
25
+ if (apiKey) {
26
+ return apiKey;
27
+ }
28
+
29
+ // Fall back to hardcoded environment variables names
30
  switch (provider) {
31
  case 'Anthropic':
32
  return env.ANTHROPIC_API_KEY || cloudflareEnv.ANTHROPIC_API_KEY;
 
61
  }
62
  }
63
 
64
+ export function getBaseURL(cloudflareEnv: Env, provider: string, providerSettings?: Record<string, IProviderSetting>) {
65
+ const { baseUrl } = getProviderBaseUrlAndKey({
66
+ provider,
67
+ apiKeys: {},
68
+ providerSettings,
69
+ serverEnv: cloudflareEnv as any,
70
+ defaultBaseUrlKey: '',
71
+ defaultApiTokenKey: '',
72
+ });
73
+
74
+ if (baseUrl) {
75
+ return baseUrl;
76
+ }
77
+
78
+ let settingBaseUrl = providerSettings?.[provider].baseUrl;
79
+
80
+ if (settingBaseUrl && settingBaseUrl.length == 0) {
81
+ settingBaseUrl = undefined;
82
+ }
83
+
84
  switch (provider) {
85
  case 'Together':
86
+ return (
87
+ settingBaseUrl ||
88
+ env.TOGETHER_API_BASE_URL ||
89
+ cloudflareEnv.TOGETHER_API_BASE_URL ||
90
+ 'https://api.together.xyz/v1'
91
+ );
92
  case 'OpenAILike':
93
+ return settingBaseUrl || env.OPENAI_LIKE_API_BASE_URL || cloudflareEnv.OPENAI_LIKE_API_BASE_URL;
94
  case 'LMStudio':
95
+ return (
96
+ settingBaseUrl || env.LMSTUDIO_API_BASE_URL || cloudflareEnv.LMSTUDIO_API_BASE_URL || 'http://localhost:1234'
97
+ );
98
  case 'Ollama': {
99
+ let baseUrl =
100
+ settingBaseUrl || env.OLLAMA_API_BASE_URL || cloudflareEnv.OLLAMA_API_BASE_URL || 'http://localhost:11434';
101
 
102
  if (env.RUNNING_IN_DOCKER === 'true') {
103
  baseUrl = baseUrl.replace('localhost', 'host.docker.internal');
app/lib/.server/llm/model.ts CHANGED
@@ -140,7 +140,7 @@ export function getPerplexityModel(apiKey: OptionalApiKey, model: string) {
140
  export function getModel(
141
  provider: string,
142
  model: string,
143
- env: Env,
144
  apiKeys?: Record<string, string>,
145
  providerSettings?: Record<string, IProviderSetting>,
146
  ) {
@@ -148,9 +148,12 @@ export function getModel(
148
  * let apiKey; // Declare first
149
  * let baseURL;
150
  */
 
151
 
152
- const apiKey = getAPIKey(env, provider, apiKeys); // Then assign
153
- const baseURL = providerSettings?.[provider].baseUrl || getBaseURL(env, provider);
 
 
154
 
155
  switch (provider) {
156
  case 'Anthropic':
 
140
  export function getModel(
141
  provider: string,
142
  model: string,
143
+ serverEnv: Env,
144
  apiKeys?: Record<string, string>,
145
  providerSettings?: Record<string, IProviderSetting>,
146
  ) {
 
148
  * let apiKey; // Declare first
149
  * let baseURL;
150
  */
151
+ // console.log({provider,model});
152
 
153
+ const apiKey = getAPIKey(serverEnv, provider, apiKeys); // Then assign
154
+ const baseURL = getBaseURL(serverEnv, provider, providerSettings);
155
+
156
+ // console.log({apiKey,baseURL});
157
 
158
  switch (provider) {
159
  case 'Anthropic':
app/lib/.server/llm/stream-text.ts CHANGED
@@ -151,10 +151,13 @@ export async function streamText(props: {
151
  providerSettings?: Record<string, IProviderSetting>;
152
  promptId?: string;
153
  }) {
154
- const { messages, env, options, apiKeys, files, providerSettings, promptId } = props;
 
 
 
155
  let currentModel = DEFAULT_MODEL;
156
  let currentProvider = DEFAULT_PROVIDER.name;
157
- const MODEL_LIST = await getModelList(apiKeys || {}, providerSettings);
158
  const processedMessages = messages.map((message) => {
159
  if (message.role === 'user') {
160
  const { model, provider, content } = extractPropertiesFromMessage(message);
@@ -196,7 +199,7 @@ export async function streamText(props: {
196
  }
197
 
198
  return _streamText({
199
- model: getModel(currentProvider, currentModel, env, apiKeys, providerSettings) as any,
200
  system: systemPrompt,
201
  maxTokens: dynamicMaxTokens,
202
  messages: convertToCoreMessages(processedMessages as any),
 
151
  providerSettings?: Record<string, IProviderSetting>;
152
  promptId?: string;
153
  }) {
154
+ const { messages, env: serverEnv, options, apiKeys, files, providerSettings, promptId } = props;
155
+
156
+ // console.log({serverEnv});
157
+
158
  let currentModel = DEFAULT_MODEL;
159
  let currentProvider = DEFAULT_PROVIDER.name;
160
+ const MODEL_LIST = await getModelList({ apiKeys, providerSettings, serverEnv: serverEnv as any });
161
  const processedMessages = messages.map((message) => {
162
  if (message.role === 'user') {
163
  const { model, provider, content } = extractPropertiesFromMessage(message);
 
199
  }
200
 
201
  return _streamText({
202
+ model: getModel(currentProvider, currentModel, serverEnv, apiKeys, providerSettings) as any,
203
  system: systemPrompt,
204
  maxTokens: dynamicMaxTokens,
205
  messages: convertToCoreMessages(processedMessages as any),
app/lib/hooks/useEditChatDescription.ts CHANGED
@@ -92,7 +92,9 @@ export function useEditChatDescription({
92
  }
93
 
94
  const lengthValid = trimmedDesc.length > 0 && trimmedDesc.length <= 100;
95
- const characterValid = /^[a-zA-Z0-9\s]+$/.test(trimmedDesc);
 
 
96
 
97
  if (!lengthValid) {
98
  toast.error('Description must be between 1 and 100 characters.');
@@ -100,7 +102,7 @@ export function useEditChatDescription({
100
  }
101
 
102
  if (!characterValid) {
103
- toast.error('Description can only contain alphanumeric characters and spaces.');
104
  return false;
105
  }
106
 
 
92
  }
93
 
94
  const lengthValid = trimmedDesc.length > 0 && trimmedDesc.length <= 100;
95
+
96
+ // Allow letters, numbers, spaces, and common punctuation but exclude characters that could cause issues
97
+ const characterValid = /^[a-zA-Z0-9\s\-_.,!?()[\]{}'"]+$/.test(trimmedDesc);
98
 
99
  if (!lengthValid) {
100
  toast.error('Description must be between 1 and 100 characters.');
 
102
  }
103
 
104
  if (!characterValid) {
105
+ toast.error('Description can only contain letters, numbers, spaces, and basic punctuation.');
106
  return false;
107
  }
108
 
app/types/model.ts CHANGED
@@ -3,7 +3,12 @@ import type { ModelInfo } from '~/utils/types';
3
  export type ProviderInfo = {
4
  staticModels: ModelInfo[];
5
  name: string;
6
- getDynamicModels?: (apiKeys?: Record<string, string>, providerSettings?: IProviderSetting) => Promise<ModelInfo[]>;
 
 
 
 
 
7
  getApiKeyLink?: string;
8
  labelForGetApiKey?: string;
9
  icon?: string;
 
3
  export type ProviderInfo = {
4
  staticModels: ModelInfo[];
5
  name: string;
6
+ getDynamicModels?: (
7
+ providerName: string,
8
+ apiKeys?: Record<string, string>,
9
+ providerSettings?: IProviderSetting,
10
+ serverEnv?: Record<string, string>,
11
+ ) => Promise<ModelInfo[]>;
12
  getApiKeyLink?: string;
13
  labelForGetApiKey?: string;
14
  icon?: string;
app/utils/constants.ts CHANGED
@@ -220,7 +220,6 @@ const PROVIDER_LIST: ProviderInfo[] = [
220
  ],
221
  getApiKeyLink: 'https://huggingface.co/settings/tokens',
222
  },
223
-
224
  {
225
  name: 'OpenAI',
226
  staticModels: [
@@ -233,7 +232,10 @@ const PROVIDER_LIST: ProviderInfo[] = [
233
  },
234
  {
235
  name: 'xAI',
236
- staticModels: [{ name: 'grok-beta', label: 'xAI Grok Beta', provider: 'xAI', maxTokenAllowed: 8000 }],
 
 
 
237
  getApiKeyLink: 'https://docs.x.ai/docs/quickstart#creating-an-api-key',
238
  },
239
  {
@@ -319,44 +321,130 @@ const PROVIDER_LIST: ProviderInfo[] = [
319
  },
320
  ];
321
 
 
322
  export const DEFAULT_PROVIDER = PROVIDER_LIST[0];
323
 
324
  const staticModels: ModelInfo[] = PROVIDER_LIST.map((p) => p.staticModels).flat();
325
 
326
  export let MODEL_LIST: ModelInfo[] = [...staticModels];
327
 
328
- export async function getModelList(
329
- apiKeys: Record<string, string>,
330
- providerSettings?: Record<string, IProviderSetting>,
331
- ) {
 
 
 
332
  MODEL_LIST = [
333
  ...(
334
  await Promise.all(
335
  PROVIDER_LIST.filter(
336
  (p): p is ProviderInfo & { getDynamicModels: () => Promise<ModelInfo[]> } => !!p.getDynamicModels,
337
- ).map((p) => p.getDynamicModels(apiKeys, providerSettings?.[p.name])),
338
  )
339
  ).flat(),
340
  ...staticModels,
341
  ];
 
342
  return MODEL_LIST;
343
  }
344
 
345
- async function getTogetherModels(apiKeys?: Record<string, string>, settings?: IProviderSetting): Promise<ModelInfo[]> {
 
 
 
 
 
346
  try {
347
- const baseUrl = settings?.baseUrl || import.meta.env.TOGETHER_API_BASE_URL || '';
348
- const provider = 'Together';
 
 
 
 
 
 
349
 
350
  if (!baseUrl) {
351
  return [];
352
  }
353
 
354
- let apiKey = import.meta.env.OPENAI_LIKE_API_KEY ?? '';
355
-
356
- if (apiKeys && apiKeys[provider]) {
357
- apiKey = apiKeys[provider];
358
- }
359
-
360
  if (!apiKey) {
361
  return [];
362
  }
@@ -374,7 +462,7 @@ async function getTogetherModels(apiKeys?: Record<string, string>, settings?: IP
374
  label: `${m.display_name} - in:$${m.pricing.input.toFixed(
375
  2,
376
  )} out:$${m.pricing.output.toFixed(2)} - context ${Math.floor(m.context_length / 1000)}k`,
377
- provider,
378
  maxTokenAllowed: 8000,
379
  }));
380
  } catch (e) {
@@ -383,24 +471,40 @@ async function getTogetherModels(apiKeys?: Record<string, string>, settings?: IP
383
  }
384
  }
385
 
386
- const getOllamaBaseUrl = (settings?: IProviderSetting) => {
387
- const defaultBaseUrl = settings?.baseUrl || import.meta.env.OLLAMA_API_BASE_URL || 'http://localhost:11434';
 
 
 
 
 
 
388
 
389
  // Check if we're in the browser
390
  if (typeof window !== 'undefined') {
391
  // Frontend always uses localhost
392
- return defaultBaseUrl;
393
  }
394
 
395
  // Backend: Check if we're running in Docker
396
  const isDocker = process.env.RUNNING_IN_DOCKER === 'true';
397
 
398
- return isDocker ? defaultBaseUrl.replace('localhost', 'host.docker.internal') : defaultBaseUrl;
399
  };
400
 
401
- async function getOllamaModels(apiKeys?: Record<string, string>, settings?: IProviderSetting): Promise<ModelInfo[]> {
 
 
 
 
 
402
  try {
403
- const baseUrl = getOllamaBaseUrl(settings);
 
 
 
 
 
404
  const response = await fetch(`${baseUrl}/api/tags`);
405
  const data = (await response.json()) as OllamaApiResponse;
406
 
@@ -419,22 +523,25 @@ async function getOllamaModels(apiKeys?: Record<string, string>, settings?: IPro
419
  }
420
 
421
  async function getOpenAILikeModels(
 
422
  apiKeys?: Record<string, string>,
423
  settings?: IProviderSetting,
 
424
  ): Promise<ModelInfo[]> {
425
  try {
426
- const baseUrl = settings?.baseUrl || import.meta.env.OPENAI_LIKE_API_BASE_URL || '';
 
 
 
 
 
 
 
427
 
428
  if (!baseUrl) {
429
  return [];
430
  }
431
 
432
- let apiKey = '';
433
-
434
- if (apiKeys && apiKeys.OpenAILike) {
435
- apiKey = apiKeys.OpenAILike;
436
- }
437
-
438
  const response = await fetch(`${baseUrl}/models`, {
439
  headers: {
440
  Authorization: `Bearer ${apiKey}`,
@@ -445,7 +552,7 @@ async function getOpenAILikeModels(
445
  return res.data.map((model: any) => ({
446
  name: model.id,
447
  label: model.id,
448
- provider: 'OpenAILike',
449
  }));
450
  } catch (e) {
451
  console.error('Error getting OpenAILike models:', e);
@@ -486,9 +593,26 @@ async function getOpenRouterModels(): Promise<ModelInfo[]> {
486
  }));
487
  }
488
 
489
- async function getLMStudioModels(_apiKeys?: Record<string, string>, settings?: IProviderSetting): Promise<ModelInfo[]> {
 
 
 
 
 
490
  try {
491
- const baseUrl = settings?.baseUrl || import.meta.env.LMSTUDIO_API_BASE_URL || 'http://localhost:1234';
 
 
 
 
 
 
 
 
 
 
 
 
492
  const response = await fetch(`${baseUrl}/v1/models`);
493
  const data = (await response.json()) as any;
494
 
@@ -503,29 +627,37 @@ async function getLMStudioModels(_apiKeys?: Record<string, string>, settings?: I
503
  }
504
  }
505
 
506
- async function initializeModelList(providerSettings?: Record<string, IProviderSetting>): Promise<ModelInfo[]> {
507
- let apiKeys: Record<string, string> = {};
 
 
 
 
 
508
 
509
- try {
510
- const storedApiKeys = Cookies.get('apiKeys');
 
511
 
512
- if (storedApiKeys) {
513
- const parsedKeys = JSON.parse(storedApiKeys);
514
 
515
- if (typeof parsedKeys === 'object' && parsedKeys !== null) {
516
- apiKeys = parsedKeys;
 
517
  }
 
 
 
518
  }
519
- } catch (error: any) {
520
- logStore.logError('Failed to fetch API keys from cookies', error);
521
- logger.warn(`Failed to fetch apikeys from cookies: ${error?.message}`);
522
  }
 
523
  MODEL_LIST = [
524
  ...(
525
  await Promise.all(
526
  PROVIDER_LIST.filter(
527
  (p): p is ProviderInfo & { getDynamicModels: () => Promise<ModelInfo[]> } => !!p.getDynamicModels,
528
- ).map((p) => p.getDynamicModels(apiKeys, providerSettings?.[p.name])),
529
  )
530
  ).flat(),
531
  ...staticModels,
@@ -534,6 +666,7 @@ async function initializeModelList(providerSettings?: Record<string, IProviderSe
534
  return MODEL_LIST;
535
  }
536
 
 
537
  export {
538
  getOllamaModels,
539
  getOpenAILikeModels,
 
220
  ],
221
  getApiKeyLink: 'https://huggingface.co/settings/tokens',
222
  },
 
223
  {
224
  name: 'OpenAI',
225
  staticModels: [
 
232
  },
233
  {
234
  name: 'xAI',
235
+ staticModels: [
236
+ { name: 'grok-beta', label: 'xAI Grok Beta', provider: 'xAI', maxTokenAllowed: 8000 },
237
+ { name: 'grok-2-1212', label: 'xAI Grok2 1212', provider: 'xAI', maxTokenAllowed: 8000 },
238
+ ],
239
  getApiKeyLink: 'https://docs.x.ai/docs/quickstart#creating-an-api-key',
240
  },
241
  {
 
321
  },
322
  ];
323
 
324
+ export const providerBaseUrlEnvKeys: Record<string, { baseUrlKey?: string; apiTokenKey?: string }> = {
325
+ Anthropic: {
326
+ apiTokenKey: 'ANTHROPIC_API_KEY',
327
+ },
328
+ OpenAI: {
329
+ apiTokenKey: 'OPENAI_API_KEY',
330
+ },
331
+ Groq: {
332
+ apiTokenKey: 'GROQ_API_KEY',
333
+ },
334
+ HuggingFace: {
335
+ apiTokenKey: 'HuggingFace_API_KEY',
336
+ },
337
+ OpenRouter: {
338
+ apiTokenKey: 'OPEN_ROUTER_API_KEY',
339
+ },
340
+ Google: {
341
+ apiTokenKey: 'GOOGLE_GENERATIVE_AI_API_KEY',
342
+ },
343
+ OpenAILike: {
344
+ baseUrlKey: 'OPENAI_LIKE_API_BASE_URL',
345
+ apiTokenKey: 'OPENAI_LIKE_API_KEY',
346
+ },
347
+ Together: {
348
+ baseUrlKey: 'TOGETHER_API_BASE_URL',
349
+ apiTokenKey: 'TOGETHER_API_KEY',
350
+ },
351
+ Deepseek: {
352
+ apiTokenKey: 'DEEPSEEK_API_KEY',
353
+ },
354
+ Mistral: {
355
+ apiTokenKey: 'MISTRAL_API_KEY',
356
+ },
357
+ LMStudio: {
358
+ baseUrlKey: 'LMSTUDIO_API_BASE_URL',
359
+ },
360
+ xAI: {
361
+ apiTokenKey: 'XAI_API_KEY',
362
+ },
363
+ Cohere: {
364
+ apiTokenKey: 'COHERE_API_KEY',
365
+ },
366
+ Perplexity: {
367
+ apiTokenKey: 'PERPLEXITY_API_KEY',
368
+ },
369
+ Ollama: {
370
+ baseUrlKey: 'OLLAMA_API_BASE_URL',
371
+ },
372
+ };
373
+
374
+ export const getProviderBaseUrlAndKey = (options: {
375
+ provider: string;
376
+ apiKeys?: Record<string, string>;
377
+ providerSettings?: IProviderSetting;
378
+ serverEnv?: Record<string, string>;
379
+ defaultBaseUrlKey: string;
380
+ defaultApiTokenKey: string;
381
+ }) => {
382
+ const { provider, apiKeys, providerSettings, serverEnv, defaultBaseUrlKey, defaultApiTokenKey } = options;
383
+ let settingsBaseUrl = providerSettings?.baseUrl;
384
+
385
+ if (settingsBaseUrl && settingsBaseUrl.length == 0) {
386
+ settingsBaseUrl = undefined;
387
+ }
388
+
389
+ const baseUrlKey = providerBaseUrlEnvKeys[provider]?.baseUrlKey || defaultBaseUrlKey;
390
+ const baseUrl = settingsBaseUrl || serverEnv?.[baseUrlKey] || process.env[baseUrlKey] || import.meta.env[baseUrlKey];
391
+
392
+ const apiTokenKey = providerBaseUrlEnvKeys[provider]?.apiTokenKey || defaultApiTokenKey;
393
+ const apiKey =
394
+ apiKeys?.[provider] || serverEnv?.[apiTokenKey] || process.env[apiTokenKey] || import.meta.env[apiTokenKey];
395
+
396
+ return {
397
+ baseUrl,
398
+ apiKey,
399
+ };
400
+ };
401
  export const DEFAULT_PROVIDER = PROVIDER_LIST[0];
402
 
403
  const staticModels: ModelInfo[] = PROVIDER_LIST.map((p) => p.staticModels).flat();
404
 
405
  export let MODEL_LIST: ModelInfo[] = [...staticModels];
406
 
407
+ export async function getModelList(options: {
408
+ apiKeys?: Record<string, string>;
409
+ providerSettings?: Record<string, IProviderSetting>;
410
+ serverEnv?: Record<string, string>;
411
+ }) {
412
+ const { apiKeys, providerSettings, serverEnv } = options;
413
+
414
  MODEL_LIST = [
415
  ...(
416
  await Promise.all(
417
  PROVIDER_LIST.filter(
418
  (p): p is ProviderInfo & { getDynamicModels: () => Promise<ModelInfo[]> } => !!p.getDynamicModels,
419
+ ).map((p) => p.getDynamicModels(p.name, apiKeys, providerSettings?.[p.name], serverEnv)),
420
  )
421
  ).flat(),
422
  ...staticModels,
423
  ];
424
+
425
  return MODEL_LIST;
426
  }
427
 
428
+ async function getTogetherModels(
429
+ name: string,
430
+ apiKeys?: Record<string, string>,
431
+ settings?: IProviderSetting,
432
+ serverEnv: Record<string, string> = {},
433
+ ): Promise<ModelInfo[]> {
434
  try {
435
+ const { baseUrl, apiKey } = getProviderBaseUrlAndKey({
436
+ provider: name,
437
+ apiKeys,
438
+ providerSettings: settings,
439
+ serverEnv,
440
+ defaultBaseUrlKey: 'TOGETHER_API_BASE_URL',
441
+ defaultApiTokenKey: 'TOGETHER_API_KEY',
442
+ });
443
 
444
  if (!baseUrl) {
445
  return [];
446
  }
447
 
 
 
 
 
 
 
448
  if (!apiKey) {
449
  return [];
450
  }
 
462
  label: `${m.display_name} - in:$${m.pricing.input.toFixed(
463
  2,
464
  )} out:$${m.pricing.output.toFixed(2)} - context ${Math.floor(m.context_length / 1000)}k`,
465
+ provider: name,
466
  maxTokenAllowed: 8000,
467
  }));
468
  } catch (e) {
 
471
  }
472
  }
473
 
474
+ const getOllamaBaseUrl = (name: string, settings?: IProviderSetting, serverEnv: Record<string, string> = {}) => {
475
+ const { baseUrl } = getProviderBaseUrlAndKey({
476
+ provider: name,
477
+ providerSettings: settings,
478
+ serverEnv,
479
+ defaultBaseUrlKey: 'OLLAMA_API_BASE_URL',
480
+ defaultApiTokenKey: '',
481
+ });
482
 
483
  // Check if we're in the browser
484
  if (typeof window !== 'undefined') {
485
  // Frontend always uses localhost
486
+ return baseUrl;
487
  }
488
 
489
  // Backend: Check if we're running in Docker
490
  const isDocker = process.env.RUNNING_IN_DOCKER === 'true';
491
 
492
+ return isDocker ? baseUrl.replace('localhost', 'host.docker.internal') : baseUrl;
493
  };
494
 
495
+ async function getOllamaModels(
496
+ name: string,
497
+ _apiKeys?: Record<string, string>,
498
+ settings?: IProviderSetting,
499
+ serverEnv: Record<string, string> = {},
500
+ ): Promise<ModelInfo[]> {
501
  try {
502
+ const baseUrl = getOllamaBaseUrl(name, settings, serverEnv);
503
+
504
+ if (!baseUrl) {
505
+ return [];
506
+ }
507
+
508
  const response = await fetch(`${baseUrl}/api/tags`);
509
  const data = (await response.json()) as OllamaApiResponse;
510
 
 
523
  }
524
 
525
  async function getOpenAILikeModels(
526
+ name: string,
527
  apiKeys?: Record<string, string>,
528
  settings?: IProviderSetting,
529
+ serverEnv: Record<string, string> = {},
530
  ): Promise<ModelInfo[]> {
531
  try {
532
+ const { baseUrl, apiKey } = getProviderBaseUrlAndKey({
533
+ provider: name,
534
+ apiKeys,
535
+ providerSettings: settings,
536
+ serverEnv,
537
+ defaultBaseUrlKey: 'OPENAI_LIKE_API_BASE_URL',
538
+ defaultApiTokenKey: 'OPENAI_LIKE_API_KEY',
539
+ });
540
 
541
  if (!baseUrl) {
542
  return [];
543
  }
544
 
 
 
 
 
 
 
545
  const response = await fetch(`${baseUrl}/models`, {
546
  headers: {
547
  Authorization: `Bearer ${apiKey}`,
 
552
  return res.data.map((model: any) => ({
553
  name: model.id,
554
  label: model.id,
555
+ provider: name,
556
  }));
557
  } catch (e) {
558
  console.error('Error getting OpenAILike models:', e);
 
593
  }));
594
  }
595
 
596
+ async function getLMStudioModels(
597
+ name: string,
598
+ apiKeys?: Record<string, string>,
599
+ settings?: IProviderSetting,
600
+ serverEnv: Record<string, string> = {},
601
+ ): Promise<ModelInfo[]> {
602
  try {
603
+ const { baseUrl } = getProviderBaseUrlAndKey({
604
+ provider: name,
605
+ apiKeys,
606
+ providerSettings: settings,
607
+ serverEnv,
608
+ defaultBaseUrlKey: 'LMSTUDIO_API_BASE_URL',
609
+ defaultApiTokenKey: '',
610
+ });
611
+
612
+ if (!baseUrl) {
613
+ return [];
614
+ }
615
+
616
  const response = await fetch(`${baseUrl}/v1/models`);
617
  const data = (await response.json()) as any;
618
 
 
627
  }
628
  }
629
 
630
+ async function initializeModelList(options: {
631
+ env?: Record<string, string>;
632
+ providerSettings?: Record<string, IProviderSetting>;
633
+ apiKeys?: Record<string, string>;
634
+ }): Promise<ModelInfo[]> {
635
+ const { providerSettings, apiKeys: providedApiKeys, env } = options;
636
+ let apiKeys: Record<string, string> = providedApiKeys || {};
637
 
638
+ if (!providedApiKeys) {
639
+ try {
640
+ const storedApiKeys = Cookies.get('apiKeys');
641
 
642
+ if (storedApiKeys) {
643
+ const parsedKeys = JSON.parse(storedApiKeys);
644
 
645
+ if (typeof parsedKeys === 'object' && parsedKeys !== null) {
646
+ apiKeys = parsedKeys;
647
+ }
648
  }
649
+ } catch (error: any) {
650
+ logStore.logError('Failed to fetch API keys from cookies', error);
651
+ logger.warn(`Failed to fetch apikeys from cookies: ${error?.message}`);
652
  }
 
 
 
653
  }
654
+
655
  MODEL_LIST = [
656
  ...(
657
  await Promise.all(
658
  PROVIDER_LIST.filter(
659
  (p): p is ProviderInfo & { getDynamicModels: () => Promise<ModelInfo[]> } => !!p.getDynamicModels,
660
+ ).map((p) => p.getDynamicModels(p.name, apiKeys, providerSettings?.[p.name], env)),
661
  )
662
  ).flat(),
663
  ...staticModels,
 
666
  return MODEL_LIST;
667
  }
668
 
669
+ // initializeModelList({})
670
  export {
671
  getOllamaModels,
672
  getOpenAILikeModels,
app/utils/shell.ts CHANGED
@@ -105,6 +105,7 @@ export class BoltShell {
105
  * this.#shellInputStream?.write('\x03');
106
  */
107
  this.terminal.input('\x03');
 
108
 
109
  if (state && state.executionPrms) {
110
  await state.executionPrms;
 
105
  * this.#shellInputStream?.write('\x03');
106
  */
107
  this.terminal.input('\x03');
108
+ await this.waitTillOscCode('prompt');
109
 
110
  if (state && state.executionPrms) {
111
  await state.executionPrms;
docs/docs/FAQ.md CHANGED
@@ -1,5 +1,19 @@
1
  # Frequently Asked Questions (FAQ)
2
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
3
  ## How do I get the best results with bolt.diy?
4
 
5
  - **Be specific about your stack**:
@@ -72,6 +86,12 @@ Local LLMs like Qwen-2.5-Coder are powerful for small applications but still exp
72
 
73
  ---
74
 
 
 
 
 
 
 
75
  ### **"Miniflare or Wrangler errors in Windows"**
76
 You will need to make sure you have the latest version of Visual Studio C++ installed (14.40.33816); more information here: https://github.com/stackblitz-labs/bolt.diy/issues/19.
77
 
 
1
  # Frequently Asked Questions (FAQ)
2
 
3
+ ## What are the best models for bolt.diy?
4
+
5
+ For the best experience with bolt.diy, we recommend using the following models:
6
+
7
+ - **Claude 3.5 Sonnet (old)**: Best overall coder, providing excellent results across all use cases
8
+ - **Gemini 2.0 Flash**: Exceptional speed while maintaining good performance
9
+ - **GPT-4o**: Strong alternative to Claude 3.5 Sonnet with comparable capabilities
10
+ - **DeepSeekCoder V2 236b**: Best open source model (available through OpenRouter, DeepSeek API, or self-hosted)
11
+ - **Qwen 2.5 Coder 32b**: Best model for self-hosting with reasonable hardware requirements
12
+
13
+ **Note**: Models with fewer than 7B parameters typically lack the capability to properly interact with bolt!
14
+
15
+ ---
16
+
17
  ## How do I get the best results with bolt.diy?
18
 
19
  - **Be specific about your stack**:
 
86
 
87
  ---
88
 
89
+ ### **"Received structured exception #0xc0000005: access violation"**
90
+
91
+ If you are getting this, you are probably on Windows. The fix is generally to update the [Visual C++ Redistributable](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170).
92
+
93
+ ---
94
+
95
  ### **"Miniflare or Wrangler errors in Windows"**
96
 You will need to make sure you have the latest version of Visual Studio C++ installed (14.40.33816); more information here: https://github.com/stackblitz-labs/bolt.diy/issues/19.
97
 
pre-start.cjs CHANGED
@@ -7,4 +7,5 @@ console.log(`
7
 ★═══════════════════════════════════════★
8
 `);
9
 console.log('📍 Current Commit Version:', commit);

10
 console.log('★═══════════════════════════════════════★');

7
 ★═══════════════════════════════════════★
8
 `);
9
 console.log('📍 Current Commit Version:', commit);
10
+ console.log(' Please wait until the URL appears here')
11
 console.log('★═══════════════════════════════════════★');
vite.config.ts CHANGED
@@ -28,7 +28,7 @@ export default defineConfig((config) => {
28
  chrome129IssuePlugin(),
29
  config.mode === 'production' && optimizeCssModules({ apply: 'build' }),
30
  ],
31
- envPrefix: ["VITE_", "OPENAI_LIKE_API_", "OLLAMA_API_BASE_URL", "LMSTUDIO_API_BASE_URL","TOGETHER_API_BASE_URL"],
32
  css: {
33
  preprocessorOptions: {
34
  scss: {
 
28
  chrome129IssuePlugin(),
29
  config.mode === 'production' && optimizeCssModules({ apply: 'build' }),
30
  ],
31
+ envPrefix: ["VITE_","OPENAI_LIKE_API_BASE_URL", "OLLAMA_API_BASE_URL", "LMSTUDIO_API_BASE_URL","TOGETHER_API_BASE_URL"],
32
  css: {
33
  preprocessorOptions: {
34
  scss: {