LlamaFinetuneGGUF committed commit f2662c1 · unverified · 2 parents: 602f65a 2638c1a

Merge branch 'main' into docs-setup-updated

FAQ.md CHANGED
```diff
@@ -2,6 +2,18 @@
 
 # bolt.diy
 
+## Recommended Models for bolt.diy
+
+For the best experience with bolt.diy, we recommend using the following models:
+
+- **Claude 3.5 Sonnet (old)**: Best overall coder, providing excellent results across all use cases
+- **Gemini 2.0 Flash**: Exceptional speed while maintaining good performance
+- **GPT-4o**: Strong alternative to Claude 3.5 Sonnet with comparable capabilities
+- **DeepSeekCoder V2 236b**: Best open source model (available through OpenRouter, DeepSeek API, or self-hosted)
+- **Qwen 2.5 Coder 32b**: Best model for self-hosting with reasonable hardware requirements
+
+**Note**: Models with less than 7b parameters typically lack the capability to properly interact with bolt!
+
 ## FAQ
 
 ### How do I get the best results with bolt.diy?
@@ -34,14 +46,18 @@ We have seen this error a couple times and for some reason just restarting the D
 
 We promise you that we are constantly testing new PRs coming into bolt.diy and the preview is core functionality, so the application is not broken! When you get a blank preview or don't get a preview, this is generally because the LLM hallucinated bad code or incorrect commands. We are working on making this more transparent so it is obvious. Sometimes the error will appear in developer console too so check that as well.
 
-### How to add a LLM:
-
-To make new LLMs available to use in this version of bolt.new, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
-
-By default, Anthropic, OpenAI, Groq, and Ollama are implemented as providers, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!
-
-When you add a new model to the MODEL_LIST array, it will immediately be available to use when you run the app locally or reload it. For Ollama models, make sure you have the model installed already before trying to use it here!
-
-### Everything works but the results are bad
-
-This goes to the point above about how local LLMs are getting very powerful but you still are going to see better (sometimes much better) results with the largest LLMs like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b. If you are using smaller LLMs like Qwen-2.5-Coder, consider it more experimental and educational at this point. It can build smaller applications really well, which is super impressive for a local LLM, but for larger scale applications you want to use the larger LLMs still!
+### Everything works but the results are bad
+
+This goes to the point above about how local LLMs are getting very powerful but you still are going to see better (sometimes much better) results with the largest LLMs like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b. If you are using smaller LLMs like Qwen-2.5-Coder, consider it more experimental and educational at this point. It can build smaller applications really well, which is super impressive for a local LLM, but for larger scale applications you want to use the larger LLMs still!
+
+### Received structured exception #0xc0000005: access violation
+
+If you are getting this, you are probably on Windows. The fix is generally to update the [Visual C++ Redistributable](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170)
+
+### How to add an LLM:
+
+To make new LLMs available to use in this version of bolt.new, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
+
+By default, many providers are already implemented, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!
+
+When you add a new model to the MODEL_LIST array, it will immediately be available to use when you run the app locally or reload it.
```

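To make the MODEL_LIST shape concrete, here is a minimal sketch of one entry. The Qwen ID below is only an example — take the real model ID from your provider's API documentation:

```ts
import type { ModelInfo } from '~/utils/types';

// Illustrative MODEL_LIST entry for app/utils/constants.ts.
const exampleEntry: ModelInfo = {
  name: 'qwen2.5-coder:32b', // model ID, exactly as the provider's API reports it
  label: 'Qwen 2.5 Coder 32b', // text shown in the frontend model dropdown
  provider: 'Ollama', // must match a provider name from PROVIDER_LIST
  maxTokenAllowed: 8000, // same completion cap the static entries in this commit use
};
```
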
README.md CHANGED
```diff
@@ -1,19 +1,32 @@
-[![bolt.diy: AI-Powered Full-Stack Web Development in the Browser](./public/social_preview_index.jpg)](https://bolt.diy)
-
 # bolt.diy (Previously oTToDev)
+[![bolt.diy: AI-Powered Full-Stack Web Development in the Browser](./public/social_preview_index.jpg)](https://bolt.diy)
 
 Welcome to bolt.diy, the official open source version of Bolt.new (previously known as oTToDev and bolt.new ANY LLM), which allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
 
-Check the [bolt.diy Docs](https://stackblitz-labs.github.io/bolt.diy/) for more information. This documentation is still being updated after the transfer.
+Check the [bolt.diy Docs](https://stackblitz-labs.github.io/bolt.diy/) for more information.
+
+We have also launched an experimental agent called the "bolt.diy Expert" that can answer common questions about bolt.diy. Find it here on the [oTTomator Live Agent Studio](https://studio.ottomator.ai/).
 
 bolt.diy was originally started by [Cole Medin](https://www.youtube.com/@ColeMedin) but has quickly grown into a massive community effort to build the BEST open source AI coding assistant!
 
-## Join the community for bolt.diy!
+## Table of Contents
+
+- [Join the Community](#join-the-community)
+- [Requested Additions](#requested-additions)
+- [Features](#features)
+- [Setup](#setup)
+- [Run the Application](#run-the-application)
+- [Available Scripts](#available-scripts)
+- [Contributing](#contributing)
+- [Roadmap](#roadmap)
+- [FAQ](#faq)
+
+## Join the community
 
-https://thinktank.ottomator.ai
+[Join the bolt.diy community here, in the thinktank on ottomator.ai!](https://thinktank.ottomator.ai)
 
 
-## Requested Additions - Feel Free to Contribute!
+## Requested Additions
 
 - ✅ OpenRouter Integration (@coleam00)
 - ✅ Gemini Integration (@jonathands)
@@ -60,7 +73,7 @@ https://thinktank.ottomator.ai
 - ⬜ Perplexity Integration
 - ⬜ Vertex AI Integration
 
-## bolt.diy Features
+## Features
 
 - **AI-powered full-stack web development** directly in your browser.
 - **Support for multiple LLMs** with an extensible architecture to integrate additional models.
@@ -70,7 +83,7 @@ https://thinktank.ottomator.ai
 - **Download projects as ZIP** for easy portability.
 - **Integration-ready Docker support** for a hassle-free setup.
 
-## Setup bolt.diy
+## Setup
 
 If you're new to installing software from GitHub, don't worry! If you encounter any issues, feel free to submit an "issue" using the provided links or improve this documentation by forking the repository, editing the instructions, and submitting a pull request. The following instruction will help you get the stable branch up and running on your local machine in no time.
 
@@ -305,4 +318,4 @@ Explore upcoming features and priorities on our [Roadmap](https://roadmap.sh/r/o
 
 ## FAQ
 
-For answers to common questions, visit our [FAQ Page](FAQ.md).
+For answers to common questions, issues, and to see a list of recommended models, visit our [FAQ Page](FAQ.md).
```

app/components/chat/BaseChat.tsx CHANGED
```diff
@@ -119,6 +119,9 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
 
     useEffect(() => {
       // Load API keys from cookies on component mount
+
+      let parsedApiKeys: Record<string, string> | undefined = {};
+
       try {
         const storedApiKeys = Cookies.get('apiKeys');
 
@@ -127,6 +130,7 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
 
           if (typeof parsedKeys === 'object' && parsedKeys !== null) {
             setApiKeys(parsedKeys);
+            parsedApiKeys = parsedKeys;
           }
         }
       } catch (error) {
@@ -155,7 +159,7 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
        Cookies.remove('providers');
      }
 
-      initializeModelList(providerSettings).then((modelList) => {
+      initializeModelList({ apiKeys: parsedApiKeys, providerSettings }).then((modelList) => {
        setModelList(modelList);
      });
 
```

app/components/settings/SettingsWindow.tsx CHANGED
```diff
@@ -63,7 +63,7 @@ export const SettingsWindow = ({ open, onClose }: SettingsProps) => {
           variants={dialogBackdropVariants}
         />
       </RadixDialog.Overlay>
-      <RadixDialog.Content asChild>
+      <RadixDialog.Content aria-describedby={undefined} asChild>
         <motion.div
           className="fixed top-[50%] left-[50%] z-max h-[85vh] w-[90vw] max-w-[900px] translate-x-[-50%] translate-y-[-50%] border border-bolt-elements-borderColor rounded-lg shadow-lg focus:outline-none overflow-hidden"
           initial="closed"
```

app/components/settings/debug/DebugTab.tsx CHANGED
```diff
@@ -2,6 +2,7 @@ import React, { useCallback, useEffect, useState } from 'react';
 import { useSettings } from '~/lib/hooks/useSettings';
 import commit from '~/commit.json';
 import { toast } from 'react-toastify';
+import { providerBaseUrlEnvKeys } from '~/utils/constants';
 
 interface ProviderStatus {
   name: string;
@@ -236,7 +237,7 @@ const checkProviderStatus = async (url: string | null, providerName: string): Pr
   }
 
   // Try different endpoints based on provider
-  const checkUrls = [`${url}/api/health`, `${url}/v1/models`];
+  const checkUrls = [`${url}/api/health`, url.endsWith('v1') ? `${url}/models` : `${url}/v1/models`];
   console.log(`[Debug] Checking additional endpoints:`, checkUrls);
 
   const results = await Promise.all(
@@ -321,14 +322,16 @@ export default function DebugTab() {
       .filter(([, provider]) => LOCAL_PROVIDERS.includes(provider.name))
       .map(async ([, provider]) => {
         const envVarName =
-          provider.name.toLowerCase() === 'ollama'
-            ? 'OLLAMA_API_BASE_URL'
-            : provider.name.toLowerCase() === 'lmstudio'
-              ? 'LMSTUDIO_API_BASE_URL'
-              : `REACT_APP_${provider.name.toUpperCase()}_URL`;
+          providerBaseUrlEnvKeys[provider.name].baseUrlKey || `REACT_APP_${provider.name.toUpperCase()}_URL`;
 
         // Access environment variables through import.meta.env
-        const url = import.meta.env[envVarName] || provider.settings.baseUrl || null; // Ensure baseUrl is used
+        let settingsUrl = provider.settings.baseUrl;
+
+        if (settingsUrl && settingsUrl.trim().length === 0) {
+          settingsUrl = undefined;
+        }
+
+        const url = settingsUrl || import.meta.env[envVarName] || null; // Ensure baseUrl is used
         console.log(`[Debug] Using URL for ${provider.name}:`, url, `(from ${envVarName})`);
 
         const status = await checkProviderStatus(url, provider.name);
```

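The new ternary in `checkUrls` simply avoids doubling `/v1` when a user's base URL already ends with it — a small sketch of the behavior, isolated for illustration:

```ts
// Same expression as the checkUrls change above.
const endpointsFor = (url: string) => [
  `${url}/api/health`,
  url.endsWith('v1') ? `${url}/models` : `${url}/v1/models`,
];

endpointsFor('http://localhost:1234/v1'); // [...'/v1/api/health', ...'/v1/models'] — no '/v1/v1'
endpointsFor('http://localhost:11434'); // [...'/api/health', ...'/v1/models']
```
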
app/components/settings/providers/ProvidersTab.tsx CHANGED
```diff
@@ -7,6 +7,7 @@ import { logStore } from '~/lib/stores/logs';
 
 // Import a default fallback icon
 import DefaultIcon from '/icons/Default.svg'; // Adjust the path as necessary
+import { providerBaseUrlEnvKeys } from '~/utils/constants';
 
 export default function ProvidersTab() {
   const { providers, updateProviderSettings, isLocalModel } = useSettings();
@@ -33,9 +34,87 @@ export default function ProvidersTab() {
 
     newFilteredProviders.sort((a, b) => a.name.localeCompare(b.name));
 
-    setFilteredProviders(newFilteredProviders);
+    // Split providers into regular and URL-configurable
+    const regular = newFilteredProviders.filter(p => !URL_CONFIGURABLE_PROVIDERS.includes(p.name));
+    const urlConfigurable = newFilteredProviders.filter(p => URL_CONFIGURABLE_PROVIDERS.includes(p.name));
+
+    setFilteredProviders([...regular, ...urlConfigurable]);
   }, [providers, searchTerm, isLocalModel]);
 
+  const renderProviderCard = (provider: IProviderConfig) => {
+    const envBaseUrlKey = providerBaseUrlEnvKeys[provider.name].baseUrlKey;
+    const envBaseUrl = envBaseUrlKey ? import.meta.env[envBaseUrlKey] : undefined;
+    const isUrlConfigurable = URL_CONFIGURABLE_PROVIDERS.includes(provider.name);
+
+    return (
+      <div
+        key={provider.name}
+        className="flex flex-col provider-item hover:bg-bolt-elements-bg-depth-3 p-4 rounded-lg border border-bolt-elements-borderColor"
+      >
+        <div className="flex items-center justify-between mb-2">
+          <div className="flex items-center gap-2">
+            <img
+              src={`/icons/${provider.name}.svg`}
+              onError={(e) => {
+                e.currentTarget.src = DefaultIcon;
+              }}
+              alt={`${provider.name} icon`}
+              className="w-6 h-6 dark:invert"
+            />
+            <span className="text-bolt-elements-textPrimary">{provider.name}</span>
+          </div>
+          <Switch
+            className="ml-auto"
+            checked={provider.settings.enabled}
+            onCheckedChange={(enabled) => {
+              updateProviderSettings(provider.name, { ...provider.settings, enabled });
+
+              if (enabled) {
+                logStore.logProvider(`Provider ${provider.name} enabled`, { provider: provider.name });
+              } else {
+                logStore.logProvider(`Provider ${provider.name} disabled`, { provider: provider.name });
+              }
+            }}
+          />
+        </div>
+        {isUrlConfigurable && provider.settings.enabled && (
+          <div className="mt-2">
+            {envBaseUrl && (
+              <label className="block text-xs text-bolt-elements-textSecondary text-green-300 mb-2">
+                Set On (.env) : {envBaseUrl}
+              </label>
+            )}
+            <label className="block text-sm text-bolt-elements-textSecondary mb-2">
+              {envBaseUrl ? 'Override Base Url' : 'Base URL '}:{' '}
+            </label>
+            <input
+              type="text"
+              value={provider.settings.baseUrl || ''}
+              onChange={(e) => {
+                let newBaseUrl: string | undefined = e.target.value;
+
+                if (newBaseUrl && newBaseUrl.trim().length === 0) {
+                  newBaseUrl = undefined;
+                }
+
+                updateProviderSettings(provider.name, { ...provider.settings, baseUrl: newBaseUrl });
+                logStore.logProvider(`Base URL updated for ${provider.name}`, {
+                  provider: provider.name,
+                  baseUrl: newBaseUrl,
+                });
+              }}
+              placeholder={`Enter ${provider.name} base URL`}
+              className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor"
+            />
+          </div>
+        )}
+      </div>
+    );
+  };
+
+  const regularProviders = filteredProviders.filter(p => !URL_CONFIGURABLE_PROVIDERS.includes(p.name));
+  const urlConfigurableProviders = filteredProviders.filter(p => URL_CONFIGURABLE_PROVIDERS.includes(p.name));
+
   return (
     <div className="p-4">
       <div className="flex mb-4">
@@ -47,60 +126,24 @@
           className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor"
         />
       </div>
-      {filteredProviders.map((provider) => (
-        <div
-          key={provider.name}
-          className="flex flex-col mb-2 provider-item hover:bg-bolt-elements-bg-depth-3 p-4 rounded-lg border border-bolt-elements-borderColor "
-        >
-          <div className="flex items-center justify-between mb-2">
-            <div className="flex items-center gap-2">
-              <img
-                src={`/icons/${provider.name}.svg`} // Attempt to load the specific icon
-                onError={(e) => {
-                  // Fallback to default icon on error
-                  e.currentTarget.src = DefaultIcon;
-                }}
-                alt={`${provider.name} icon`}
-                className="w-6 h-6 dark:invert"
-              />
-              <span className="text-bolt-elements-textPrimary">{provider.name}</span>
-            </div>
-            <Switch
-              className="ml-auto"
-              checked={provider.settings.enabled}
-              onCheckedChange={(enabled) => {
-                updateProviderSettings(provider.name, { ...provider.settings, enabled });
-
-                if (enabled) {
-                  logStore.logProvider(`Provider ${provider.name} enabled`, { provider: provider.name });
-                } else {
-                  logStore.logProvider(`Provider ${provider.name} disabled`, { provider: provider.name });
-                }
-              }}
-            />
-          </div>
-          {/* Base URL input for configurable providers */}
-          {URL_CONFIGURABLE_PROVIDERS.includes(provider.name) && provider.settings.enabled && (
-            <div className="mt-2">
-              <label className="block text-sm text-bolt-elements-textSecondary mb-1">Base URL:</label>
-              <input
-                type="text"
-                value={provider.settings.baseUrl || ''}
-                onChange={(e) => {
-                  const newBaseUrl = e.target.value;
-                  updateProviderSettings(provider.name, { ...provider.settings, baseUrl: newBaseUrl });
-                  logStore.logProvider(`Base URL updated for ${provider.name}`, {
-                    provider: provider.name,
-                    baseUrl: newBaseUrl,
-                  });
-                }}
-                placeholder={`Enter ${provider.name} base URL`}
-                className="w-full bg-white dark:bg-bolt-elements-background-depth-4 relative px-2 py-1.5 rounded-md focus:outline-none placeholder-bolt-elements-textTertiary text-bolt-elements-textPrimary dark:text-bolt-elements-textPrimary border border-bolt-elements-borderColor"
-              />
-            </div>
-          )}
-        </div>
-      ))}
+
+      {/* Regular Providers Grid */}
+      <div className="grid grid-cols-2 gap-4 mb-8">
+        {regularProviders.map(renderProviderCard)}
+      </div>
+
+      {/* URL Configurable Providers Section */}
+      {urlConfigurableProviders.length > 0 && (
+        <div className="mt-8">
+          <h3 className="text-lg font-semibold mb-2 text-bolt-elements-textPrimary">Experimental Providers</h3>
+          <p className="text-sm text-bolt-elements-textSecondary mb-4">
+            These providers are experimental and allow you to run AI models locally or connect to your own infrastructure. They require additional setup but offer more flexibility.
+          </p>
+          <div className="space-y-4">
+            {urlConfigurableProviders.map(renderProviderCard)}
+          </div>
+        </div>
+      )}
     </div>
   );
-}
+}
```

app/entry.server.tsx CHANGED
```diff
@@ -14,7 +14,7 @@ export default async function handleRequest(
   remixContext: EntryContext,
   _loadContext: AppLoadContext,
 ) {
-  await initializeModelList();
+  await initializeModelList({});
 
   const readable = await renderToReadableStream(<RemixServer context={remixContext} url={request.url} />, {
     signal: request.signal,
```

app/lib/.server/llm/api-key.ts CHANGED
```diff
@@ -1,8 +1,6 @@
-/*
- * @ts-nocheck
- * Preventing TS checks with files presented in the video for a better presentation.
- */
 import { env } from 'node:process';
+import type { IProviderSetting } from '~/types/model';
+import { getProviderBaseUrlAndKey } from '~/utils/constants';
 
 export function getAPIKey(cloudflareEnv: Env, provider: string, userApiKeys?: Record<string, string>) {
   /**
@@ -15,7 +13,20 @@ export function getAPIKey(cloudflareEnv: Env, provider: string, userApiKeys?: Re
     return userApiKeys[provider];
   }
 
-  // Fall back to environment variables
+  const { apiKey } = getProviderBaseUrlAndKey({
+    provider,
+    apiKeys: userApiKeys,
+    providerSettings: undefined,
+    serverEnv: cloudflareEnv as any,
+    defaultBaseUrlKey: '',
+    defaultApiTokenKey: '',
+  });
+
+  if (apiKey) {
+    return apiKey;
+  }
+
+  // Fall back to hardcoded environment variables names
   switch (provider) {
     case 'Anthropic':
       return env.ANTHROPIC_API_KEY || cloudflareEnv.ANTHROPIC_API_KEY;
@@ -50,16 +61,43 @@ export function getAPIKey(cloudflareEnv: Env, provider: string, userApiKeys?: Re
   }
 }
 
-export function getBaseURL(cloudflareEnv: Env, provider: string) {
+export function getBaseURL(cloudflareEnv: Env, provider: string, providerSettings?: Record<string, IProviderSetting>) {
+  const { baseUrl } = getProviderBaseUrlAndKey({
+    provider,
+    apiKeys: {},
+    providerSettings,
+    serverEnv: cloudflareEnv as any,
+    defaultBaseUrlKey: '',
+    defaultApiTokenKey: '',
+  });
+
+  if (baseUrl) {
+    return baseUrl;
+  }
+
+  let settingBaseUrl = providerSettings?.[provider].baseUrl;
+
+  if (settingBaseUrl && settingBaseUrl.length == 0) {
+    settingBaseUrl = undefined;
+  }
+
   switch (provider) {
     case 'Together':
-      return env.TOGETHER_API_BASE_URL || cloudflareEnv.TOGETHER_API_BASE_URL || 'https://api.together.xyz/v1';
+      return (
+        settingBaseUrl ||
+        env.TOGETHER_API_BASE_URL ||
+        cloudflareEnv.TOGETHER_API_BASE_URL ||
+        'https://api.together.xyz/v1'
+      );
     case 'OpenAILike':
-      return env.OPENAI_LIKE_API_BASE_URL || cloudflareEnv.OPENAI_LIKE_API_BASE_URL;
+      return settingBaseUrl || env.OPENAI_LIKE_API_BASE_URL || cloudflareEnv.OPENAI_LIKE_API_BASE_URL;
     case 'LMStudio':
-      return env.LMSTUDIO_API_BASE_URL || cloudflareEnv.LMSTUDIO_API_BASE_URL || 'http://localhost:1234';
+      return (
+        settingBaseUrl || env.LMSTUDIO_API_BASE_URL || cloudflareEnv.LMSTUDIO_API_BASE_URL || 'http://localhost:1234'
+      );
     case 'Ollama': {
-      let baseUrl = env.OLLAMA_API_BASE_URL || cloudflareEnv.OLLAMA_API_BASE_URL || 'http://localhost:11434';
+      let baseUrl =
+        settingBaseUrl || env.OLLAMA_API_BASE_URL || cloudflareEnv.OLLAMA_API_BASE_URL || 'http://localhost:11434';
 
       if (env.RUNNING_IN_DOCKER === 'true') {
         baseUrl = baseUrl.replace('localhost', 'host.docker.internal');
```

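Net effect: API-key lookup is now tiered. A hedged illustration of the precedence (the call site and key value are invented; `Env` is the Workers-style env type the file already uses):

```ts
import { getAPIKey } from '~/lib/.server/llm/api-key';

declare const cloudflareEnv: Env; // supplied by the runtime in real code

// Order of precedence after this change:
// 1. per-user keys passed in from the client (cookie-backed),
// 2. the providerBaseUrlEnvKeys registry via getProviderBaseUrlAndKey (e.g. ANTHROPIC_API_KEY),
// 3. the hardcoded env/cloudflareEnv switch as a last resort.
const key = getAPIKey(cloudflareEnv, 'Anthropic', { Anthropic: 'sk-user-supplied' });
// → 'sk-user-supplied', because a user-provided key short-circuits the lookup
```
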
app/lib/.server/llm/model.ts CHANGED
```diff
@@ -140,7 +140,7 @@ export function getPerplexityModel(apiKey: OptionalApiKey, model: string) {
 export function getModel(
   provider: string,
   model: string,
-  env: Env,
+  serverEnv: Env,
   apiKeys?: Record<string, string>,
   providerSettings?: Record<string, IProviderSetting>,
 ) {
@@ -148,9 +148,12 @@ export function getModel(
    * let apiKey; // Declare first
    * let baseURL;
    */
+  // console.log({provider,model});
 
-  const apiKey = getAPIKey(env, provider, apiKeys); // Then assign
-  const baseURL = providerSettings?.[provider].baseUrl || getBaseURL(env, provider);
+  const apiKey = getAPIKey(serverEnv, provider, apiKeys); // Then assign
+  const baseURL = getBaseURL(serverEnv, provider, providerSettings);
+
+  // console.log({apiKey,baseURL});
 
   switch (provider) {
     case 'Anthropic':
```

app/lib/.server/llm/stream-text.ts CHANGED
```diff
@@ -151,10 +151,13 @@ export async function streamText(props: {
   providerSettings?: Record<string, IProviderSetting>;
   promptId?: string;
 }) {
-  const { messages, env, options, apiKeys, files, providerSettings, promptId } = props;
+  const { messages, env: serverEnv, options, apiKeys, files, providerSettings, promptId } = props;
+
+  // console.log({serverEnv});
+
   let currentModel = DEFAULT_MODEL;
   let currentProvider = DEFAULT_PROVIDER.name;
-  const MODEL_LIST = await getModelList(apiKeys || {}, providerSettings);
+  const MODEL_LIST = await getModelList({ apiKeys, providerSettings, serverEnv: serverEnv as any });
   const processedMessages = messages.map((message) => {
     if (message.role === 'user') {
       const { model, provider, content } = extractPropertiesFromMessage(message);
@@ -196,7 +199,7 @@ export async function streamText(props: {
   }
 
   return _streamText({
-    model: getModel(currentProvider, currentModel, env, apiKeys, providerSettings) as any,
+    model: getModel(currentProvider, currentModel, serverEnv, apiKeys, providerSettings) as any,
     system: systemPrompt,
     maxTokens: dynamicMaxTokens,
     messages: convertToCoreMessages(processedMessages as any),
```

app/lib/hooks/useEditChatDescription.ts CHANGED
```diff
@@ -92,6 +92,7 @@ export function useEditChatDescription({
   }
 
   const lengthValid = trimmedDesc.length > 0 && trimmedDesc.length <= 100;
+
   // Allow letters, numbers, spaces, and common punctuation but exclude characters that could cause issues
   const characterValid = /^[a-zA-Z0-9\s\-_.,!?()[\]{}'"]+$/.test(trimmedDesc);
 
```

app/types/model.ts CHANGED
```diff
@@ -3,7 +3,12 @@ import type { ModelInfo } from '~/utils/types';
 export type ProviderInfo = {
   staticModels: ModelInfo[];
   name: string;
-  getDynamicModels?: (apiKeys?: Record<string, string>, providerSettings?: IProviderSetting) => Promise<ModelInfo[]>;
+  getDynamicModels?: (
+    providerName: string,
+    apiKeys?: Record<string, string>,
+    providerSettings?: IProviderSetting,
+    serverEnv?: Record<string, string>,
+  ) => Promise<ModelInfo[]>;
   getApiKeyLink?: string;
   labelForGetApiKey?: string;
   icon?: string;
```

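For context, a provider entry satisfying the widened signature might look like the sketch below. The provider name, env key, endpoint, and response shape are assumptions for illustration, not one of bolt.diy's real providers:

```ts
import type { ProviderInfo } from '~/types/model';

const exampleProvider: ProviderInfo = {
  name: 'ExampleProvider',
  staticModels: [],
  getDynamicModels: async (providerName, apiKeys, providerSettings, serverEnv) => {
    // Prefer the user's settings, then a (hypothetical) server env variable.
    const baseUrl = providerSettings?.baseUrl || serverEnv?.EXAMPLE_API_BASE_URL;

    if (!baseUrl) {
      return []; // same guard the real fetchers in this commit use
    }

    const res = await fetch(`${baseUrl}/models`, {
      headers: { Authorization: `Bearer ${apiKeys?.[providerName] ?? ''}` },
    });
    const data = (await res.json()) as { data: { id: string }[] };

    return data.data.map((m) => ({ name: m.id, label: m.id, provider: providerName }));
  },
};
```
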
app/utils/constants.ts CHANGED
```diff
@@ -220,7 +220,6 @@ const PROVIDER_LIST: ProviderInfo[] = [
     ],
     getApiKeyLink: 'https://huggingface.co/settings/tokens',
   },
-
   {
     name: 'OpenAI',
     staticModels: [
@@ -233,7 +232,10 @@
   },
   {
     name: 'xAI',
-    staticModels: [{ name: 'grok-beta', label: 'xAI Grok Beta', provider: 'xAI', maxTokenAllowed: 8000 }],
+    staticModels: [
+      { name: 'grok-beta', label: 'xAI Grok Beta', provider: 'xAI', maxTokenAllowed: 8000 },
+      { name: 'grok-2-1212', label: 'xAI Grok2 1212', provider: 'xAI', maxTokenAllowed: 8000 },
+    ],
     getApiKeyLink: 'https://docs.x.ai/docs/quickstart#creating-an-api-key',
   },
   {
@@ -319,44 +321,130 @@
   },
 ];
 
+export const providerBaseUrlEnvKeys: Record<string, { baseUrlKey?: string; apiTokenKey?: string }> = {
+  Anthropic: {
+    apiTokenKey: 'ANTHROPIC_API_KEY',
+  },
+  OpenAI: {
+    apiTokenKey: 'OPENAI_API_KEY',
+  },
+  Groq: {
+    apiTokenKey: 'GROQ_API_KEY',
+  },
+  HuggingFace: {
+    apiTokenKey: 'HuggingFace_API_KEY',
+  },
+  OpenRouter: {
+    apiTokenKey: 'OPEN_ROUTER_API_KEY',
+  },
+  Google: {
+    apiTokenKey: 'GOOGLE_GENERATIVE_AI_API_KEY',
+  },
+  OpenAILike: {
+    baseUrlKey: 'OPENAI_LIKE_API_BASE_URL',
+    apiTokenKey: 'OPENAI_LIKE_API_KEY',
+  },
+  Together: {
+    baseUrlKey: 'TOGETHER_API_BASE_URL',
+    apiTokenKey: 'TOGETHER_API_KEY',
+  },
+  Deepseek: {
+    apiTokenKey: 'DEEPSEEK_API_KEY',
+  },
+  Mistral: {
+    apiTokenKey: 'MISTRAL_API_KEY',
+  },
+  LMStudio: {
+    baseUrlKey: 'LMSTUDIO_API_BASE_URL',
+  },
+  xAI: {
+    apiTokenKey: 'XAI_API_KEY',
+  },
+  Cohere: {
+    apiTokenKey: 'COHERE_API_KEY',
+  },
+  Perplexity: {
+    apiTokenKey: 'PERPLEXITY_API_KEY',
+  },
+  Ollama: {
+    baseUrlKey: 'OLLAMA_API_BASE_URL',
+  },
+};
+
+export const getProviderBaseUrlAndKey = (options: {
+  provider: string;
+  apiKeys?: Record<string, string>;
+  providerSettings?: IProviderSetting;
+  serverEnv?: Record<string, string>;
+  defaultBaseUrlKey: string;
+  defaultApiTokenKey: string;
+}) => {
+  const { provider, apiKeys, providerSettings, serverEnv, defaultBaseUrlKey, defaultApiTokenKey } = options;
+  let settingsBaseUrl = providerSettings?.baseUrl;
+
+  if (settingsBaseUrl && settingsBaseUrl.length == 0) {
+    settingsBaseUrl = undefined;
+  }
+
+  const baseUrlKey = providerBaseUrlEnvKeys[provider]?.baseUrlKey || defaultBaseUrlKey;
+  const baseUrl = settingsBaseUrl || serverEnv?.[baseUrlKey] || process.env[baseUrlKey] || import.meta.env[baseUrlKey];
+
+  const apiTokenKey = providerBaseUrlEnvKeys[provider]?.apiTokenKey || defaultApiTokenKey;
+  const apiKey =
+    apiKeys?.[provider] || serverEnv?.[apiTokenKey] || process.env[apiTokenKey] || import.meta.env[apiTokenKey];
+
+  return {
+    baseUrl,
+    apiKey,
+  };
+};
 export const DEFAULT_PROVIDER = PROVIDER_LIST[0];
 
 const staticModels: ModelInfo[] = PROVIDER_LIST.map((p) => p.staticModels).flat();
 
 export let MODEL_LIST: ModelInfo[] = [...staticModels];
 
-export async function getModelList(
-  apiKeys: Record<string, string>,
-  providerSettings?: Record<string, IProviderSetting>,
-) {
+export async function getModelList(options: {
+  apiKeys?: Record<string, string>;
+  providerSettings?: Record<string, IProviderSetting>;
+  serverEnv?: Record<string, string>;
+}) {
+  const { apiKeys, providerSettings, serverEnv } = options;
+
   MODEL_LIST = [
     ...(
       await Promise.all(
         PROVIDER_LIST.filter(
           (p): p is ProviderInfo & { getDynamicModels: () => Promise<ModelInfo[]> } => !!p.getDynamicModels,
-        ).map((p) => p.getDynamicModels(apiKeys, providerSettings?.[p.name])),
+        ).map((p) => p.getDynamicModels(p.name, apiKeys, providerSettings?.[p.name], serverEnv)),
       )
     ).flat(),
     ...staticModels,
   ];
+
   return MODEL_LIST;
 }
 
-async function getTogetherModels(apiKeys?: Record<string, string>, settings?: IProviderSetting): Promise<ModelInfo[]> {
+async function getTogetherModels(
+  name: string,
+  apiKeys?: Record<string, string>,
+  settings?: IProviderSetting,
+  serverEnv: Record<string, string> = {},
+): Promise<ModelInfo[]> {
   try {
-    const baseUrl = settings?.baseUrl || import.meta.env.TOGETHER_API_BASE_URL || '';
-    const provider = 'Together';
+    const { baseUrl, apiKey } = getProviderBaseUrlAndKey({
+      provider: name,
+      apiKeys,
+      providerSettings: settings,
+      serverEnv,
+      defaultBaseUrlKey: 'TOGETHER_API_BASE_URL',
+      defaultApiTokenKey: 'TOGETHER_API_KEY',
+    });
 
     if (!baseUrl) {
       return [];
     }
 
-    let apiKey = import.meta.env.OPENAI_LIKE_API_KEY ?? '';
-
-    if (apiKeys && apiKeys[provider]) {
-      apiKey = apiKeys[provider];
-    }
-
     if (!apiKey) {
       return [];
     }
@@ -374,7 +462,7 @@ async function getTogetherModels(apiKeys?: Record<string, string>, settings?: IP
       label: `${m.display_name} - in:$${m.pricing.input.toFixed(
         2,
       )} out:$${m.pricing.output.toFixed(2)} - context ${Math.floor(m.context_length / 1000)}k`,
-      provider,
+      provider: name,
       maxTokenAllowed: 8000,
     }));
   } catch (e) {
@@ -383,24 +471,40 @@ async function getTogetherModels(apiKeys?: Record<string, string>, settings?: IP
   }
 }
 
-const getOllamaBaseUrl = (settings?: IProviderSetting) => {
-  const defaultBaseUrl = settings?.baseUrl || import.meta.env.OLLAMA_API_BASE_URL || 'http://localhost:11434';
+const getOllamaBaseUrl = (name: string, settings?: IProviderSetting, serverEnv: Record<string, string> = {}) => {
+  const { baseUrl } = getProviderBaseUrlAndKey({
+    provider: name,
+    providerSettings: settings,
+    serverEnv,
+    defaultBaseUrlKey: 'OLLAMA_API_BASE_URL',
+    defaultApiTokenKey: '',
+  });
 
   // Check if we're in the browser
   if (typeof window !== 'undefined') {
     // Frontend always uses localhost
-    return defaultBaseUrl;
+    return baseUrl;
   }
 
   // Backend: Check if we're running in Docker
   const isDocker = process.env.RUNNING_IN_DOCKER === 'true';
 
-  return isDocker ? defaultBaseUrl.replace('localhost', 'host.docker.internal') : defaultBaseUrl;
+  return isDocker ? baseUrl.replace('localhost', 'host.docker.internal') : baseUrl;
 };
 
-async function getOllamaModels(apiKeys?: Record<string, string>, settings?: IProviderSetting): Promise<ModelInfo[]> {
+async function getOllamaModels(
+  name: string,
+  _apiKeys?: Record<string, string>,
+  settings?: IProviderSetting,
+  serverEnv: Record<string, string> = {},
+): Promise<ModelInfo[]> {
   try {
-    const baseUrl = getOllamaBaseUrl(settings);
+    const baseUrl = getOllamaBaseUrl(name, settings, serverEnv);
+
+    if (!baseUrl) {
+      return [];
+    }
+
     const response = await fetch(`${baseUrl}/api/tags`);
     const data = (await response.json()) as OllamaApiResponse;
 
@@ -419,22 +523,25 @@
 }
 
 async function getOpenAILikeModels(
+  name: string,
   apiKeys?: Record<string, string>,
   settings?: IProviderSetting,
+  serverEnv: Record<string, string> = {},
 ): Promise<ModelInfo[]> {
   try {
-    const baseUrl = settings?.baseUrl || import.meta.env.OPENAI_LIKE_API_BASE_URL || '';
+    const { baseUrl, apiKey } = getProviderBaseUrlAndKey({
+      provider: name,
+      apiKeys,
+      providerSettings: settings,
+      serverEnv,
+      defaultBaseUrlKey: 'OPENAI_LIKE_API_BASE_URL',
+      defaultApiTokenKey: 'OPENAI_LIKE_API_KEY',
+    });
 
     if (!baseUrl) {
       return [];
     }
 
-    let apiKey = '';
-
-    if (apiKeys && apiKeys.OpenAILike) {
-      apiKey = apiKeys.OpenAILike;
-    }
-
     const response = await fetch(`${baseUrl}/models`, {
       headers: {
         Authorization: `Bearer ${apiKey}`,
@@ -445,7 +552,7 @@ async function getOpenAILikeModels(
     return res.data.map((model: any) => ({
       name: model.id,
      label: model.id,
-      provider: 'OpenAILike',
+      provider: name,
     }));
   } catch (e) {
     console.error('Error getting OpenAILike models:', e);
@@ -486,9 +593,26 @@ async function getOpenRouterModels(): Promise<ModelInfo[]> {
   }));
 }
 
-async function getLMStudioModels(_apiKeys?: Record<string, string>, settings?: IProviderSetting): Promise<ModelInfo[]> {
+async function getLMStudioModels(
+  name: string,
+  apiKeys?: Record<string, string>,
+  settings?: IProviderSetting,
+  serverEnv: Record<string, string> = {},
+): Promise<ModelInfo[]> {
   try {
-    const baseUrl = settings?.baseUrl || import.meta.env.LMSTUDIO_API_BASE_URL || 'http://localhost:1234';
+    const { baseUrl } = getProviderBaseUrlAndKey({
+      provider: name,
+      apiKeys,
+      providerSettings: settings,
+      serverEnv,
+      defaultBaseUrlKey: 'LMSTUDIO_API_BASE_URL',
+      defaultApiTokenKey: '',
+    });
+
+    if (!baseUrl) {
+      return [];
+    }
+
     const response = await fetch(`${baseUrl}/v1/models`);
     const data = (await response.json()) as any;
 
@@ -503,29 +627,37 @@ async function getLMStudioModels(_apiKeys?: Record<string, string>, settings?: I
   }
 }
 
-async function initializeModelList(providerSettings?: Record<string, IProviderSetting>): Promise<ModelInfo[]> {
-  let apiKeys: Record<string, string> = {};
+async function initializeModelList(options: {
+  env?: Record<string, string>;
+  providerSettings?: Record<string, IProviderSetting>;
+  apiKeys?: Record<string, string>;
+}): Promise<ModelInfo[]> {
+  const { providerSettings, apiKeys: providedApiKeys, env } = options;
+  let apiKeys: Record<string, string> = providedApiKeys || {};
 
-  try {
-    const storedApiKeys = Cookies.get('apiKeys');
+  if (!providedApiKeys) {
+    try {
+      const storedApiKeys = Cookies.get('apiKeys');
 
-    if (storedApiKeys) {
-      const parsedKeys = JSON.parse(storedApiKeys);
+      if (storedApiKeys) {
+        const parsedKeys = JSON.parse(storedApiKeys);
 
-      if (typeof parsedKeys === 'object' && parsedKeys !== null) {
-        apiKeys = parsedKeys;
+        if (typeof parsedKeys === 'object' && parsedKeys !== null) {
+          apiKeys = parsedKeys;
+        }
       }
+    } catch (error: any) {
+      logStore.logError('Failed to fetch API keys from cookies', error);
+      logger.warn(`Failed to fetch apikeys from cookies: ${error?.message}`);
     }
-  } catch (error: any) {
-    logStore.logError('Failed to fetch API keys from cookies', error);
-    logger.warn(`Failed to fetch apikeys from cookies: ${error?.message}`);
   }
+
   MODEL_LIST = [
     ...(
       await Promise.all(
         PROVIDER_LIST.filter(
           (p): p is ProviderInfo & { getDynamicModels: () => Promise<ModelInfo[]> } => !!p.getDynamicModels,
-        ).map((p) => p.getDynamicModels(apiKeys, providerSettings?.[p.name])),
+        ).map((p) => p.getDynamicModels(p.name, apiKeys, providerSettings?.[p.name], env)),
       )
     ).flat(),
     ...staticModels,
@@ -534,6 +666,7 @@ async function initializeModelList(providerSettings?: Record<string, IProviderSe
   return MODEL_LIST;
 }
 
+// initializeModelList({})
 export {
   getOllamaModels,
   getOpenAILikeModels,
```

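Taken together, base URLs and keys now resolve through one helper. A minimal usage sketch (the caller and the `serverEnv` record are assumptions; both functions are exported by this file):

```ts
import { getModelList, getProviderBaseUrlAndKey } from '~/utils/constants';

async function listModelsForRequest(serverEnv: Record<string, string>) {
  // Resolution order inside getProviderBaseUrlAndKey:
  // settings.baseUrl → serverEnv → process.env → import.meta.env.
  const { baseUrl, apiKey } = getProviderBaseUrlAndKey({
    provider: 'Together',
    apiKeys: {}, // per-user keys (e.g. parsed from the apiKeys cookie)
    providerSettings: undefined,
    serverEnv,
    defaultBaseUrlKey: 'TOGETHER_API_BASE_URL',
    defaultApiTokenKey: 'TOGETHER_API_KEY',
  });
  console.log({ baseUrl, hasKey: Boolean(apiKey) });

  // Static models plus whatever each provider's getDynamicModels returns.
  return getModelList({ apiKeys: {}, providerSettings: undefined, serverEnv });
}
```
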
app/utils/shell.ts CHANGED
```diff
@@ -105,6 +105,7 @@ export class BoltShell {
      * this.#shellInputStream?.write('\x03');
      */
     this.terminal.input('\x03');
+    await this.waitTillOscCode('prompt');
 
     if (state && state.executionPrms) {
       await state.executionPrms;
```

docs/docs/FAQ.md CHANGED
```diff
@@ -1,6 +1,21 @@
 # Frequently Asked Questions (FAQ)
 
-## How do I get the best results with bolt.diy?
+<details>
+<summary><strong>What are the best models for bolt.diy?</strong></summary>
+
+For the best experience with bolt.diy, we recommend using the following models:
+
+- **Claude 3.5 Sonnet (old)**: Best overall coder, providing excellent results across all use cases
+- **Gemini 2.0 Flash**: Exceptional speed while maintaining good performance
+- **GPT-4o**: Strong alternative to Claude 3.5 Sonnet with comparable capabilities
+- **DeepSeekCoder V2 236b**: Best open source model (available through OpenRouter, DeepSeek API, or self-hosted)
+- **Qwen 2.5 Coder 32b**: Best model for self-hosting with reasonable hardware requirements
+
+**Note**: Models with less than 7b parameters typically lack the capability to properly interact with bolt!
+</details>
+
+<details>
+<summary><strong>How do I get the best results with bolt.diy?</strong></summary>
 
 - **Be specific about your stack**:
   Mention the frameworks or libraries you want to use (e.g., Astro, Tailwind, ShadCN) in your initial prompt. This ensures that bolt.diy scaffolds the project according to your preferences.
@@ -14,66 +29,62 @@
 - **Batch simple instructions**:
   Combine simple tasks into a single prompt to save time and reduce API credit consumption. For example:
   *"Change the color scheme, add mobile responsiveness, and restart the dev server."*
+</details>
 
----
-
-## How do I contribute to bolt.diy?
+<details>
+<summary><strong>How do I contribute to bolt.diy?</strong></summary>
 
 Check out our [Contribution Guide](CONTRIBUTING.md) for more details on how to get involved!
+</details>
 
----
-
-## What are the future plans for bolt.diy?
+<details>
+<summary><strong>What are the future plans for bolt.diy?</strong></summary>
 
 Visit our [Roadmap](https://roadmap.sh/r/ottodev-roadmap-2ovzo) for the latest updates.
 New features and improvements are on the way!
+</details>
 
----
-
-## Why are there so many open issues/pull requests?
+<details>
+<summary><strong>Why are there so many open issues/pull requests?</strong></summary>
 
 bolt.diy began as a small showcase project on @ColeMedin's YouTube channel to explore editing open-source projects with local LLMs. However, it quickly grew into a massive community effort!
 
-We’re forming a team of maintainers to manage demand and streamline issue resolution. The maintainers are rockstars, and we’re also exploring partnerships to help the project thrive.
+We're forming a team of maintainers to manage demand and streamline issue resolution. The maintainers are rockstars, and we're also exploring partnerships to help the project thrive.
+</details>
 
----
-
-## How do local LLMs compare to larger models like Claude 3.5 Sonnet for bolt.diy?
+<details>
+<summary><strong>How do local LLMs compare to larger models like Claude 3.5 Sonnet for bolt.diy?</strong></summary>
 
 While local LLMs are improving rapidly, larger models like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b still offer the best results for complex applications. Our ongoing focus is to improve prompts, agents, and the platform to better support smaller local LLMs.
+</details>
 
----
-
-## Common Errors and Troubleshooting
+<details>
+<summary><strong>Common Errors and Troubleshooting</strong></summary>
 
 ### **"There was an error processing this request"**
 This generic error message means something went wrong. Check both:
 - The terminal (if you started the app with Docker or `pnpm`).
 - The developer console in your browser (press `F12` or right-click > *Inspect*, then go to the *Console* tab).
 
----
-
 ### **"x-api-key header missing"**
 This error is sometimes resolved by restarting the Docker container.
-If that doesn’t work, try switching from Docker to `pnpm` or vice versa. We’re actively investigating this issue.
-
----
+If that doesn't work, try switching from Docker to `pnpm` or vice versa. We're actively investigating this issue.
 
 ### **Blank preview when running the app**
 A blank preview often occurs due to hallucinated bad code or incorrect commands.
 To troubleshoot:
 - Check the developer console for errors.
-- Remember, previews are core functionality, so the app isn’t broken! We’re working on making these errors more transparent.
-
----
+- Remember, previews are core functionality, so the app isn't broken! We're working on making these errors more transparent.
 
 ### **"Everything works, but the results are bad"**
 Local LLMs like Qwen-2.5-Coder are powerful for small applications but still experimental for larger projects. For better results, consider using larger models like GPT-4o, Claude 3.5 Sonnet, or DeepSeek Coder V2 236b.
 
----
+### **"Received structured exception #0xc0000005: access violation"**
+If you are getting this, you are probably on Windows. The fix is generally to update the [Visual C++ Redistributable](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170)
 
 ### **"Miniflare or Wrangler errors in Windows"**
 You will need to make sure you have the latest version of Visual Studio C++ installed (14.40.33816), more information here https://github.com/stackblitz-labs/bolt.diy/issues/19.
+</details>
 
 ---
 
```

docs/docs/index.md CHANGED
```diff
@@ -1,6 +1,21 @@
 # Welcome to bolt diy
 bolt.diy allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.
 
+## Table of Contents
+- [Join the community!](#join-the-community)
+- [What's bolt.diy](#whats-boltdiy)
+- [What Makes bolt.diy Different](#what-makes-boltdiy-different)
+- [Setup](#setup)
+- [Run with Docker](#run-with-docker)
+  - [Using Helper Scripts](#1a-using-helper-scripts)
+  - [Direct Docker Build Commands](#1b-direct-docker-build-commands-alternative-to-using-npm-scripts)
+  - [Docker Compose with Profiles](#2-docker-compose-with-profiles-to-run-the-container)
+- [Run Without Docker](#run-without-docker)
+- [Adding New LLMs](#adding-new-llms)
+- [Available Scripts](#available-scripts)
+- [Development](#development)
+- [Tips and Tricks](#tips-and-tricks)
+
 ---
 
 ## Join the community!
```

pre-start.cjs CHANGED
```diff
@@ -7,4 +7,5 @@ console.log(`
 ★═══════════════════════════════════════★
 `);
 console.log('📍 Current Commit Version:', commit);
+console.log('  Please wait until the URL appears here')
 console.log('★═══════════════════════════════════════★');
```

vite.config.ts CHANGED
```diff
@@ -28,7 +28,7 @@ export default defineConfig((config) => {
     chrome129IssuePlugin(),
     config.mode === 'production' && optimizeCssModules({ apply: 'build' }),
   ],
-  envPrefix: ["VITE_", "OPENAI_LIKE_API_", "OLLAMA_API_BASE_URL", "LMSTUDIO_API_BASE_URL","TOGETHER_API_BASE_URL"],
+  envPrefix: ["VITE_","OPENAI_LIKE_API_BASE_URL", "OLLAMA_API_BASE_URL", "LMSTUDIO_API_BASE_URL","TOGETHER_API_BASE_URL"],
   css: {
     preprocessorOptions: {
       scss: {
```