armfuls committed
Commit 73f4399 · unverified
2 Parent(s): 174714c 13b1321

Merge branch 'coleam00:main' into main
.dockerignore ADDED
@@ -0,0 +1,26 @@
+ # Ignore Git and GitHub files
+ .git
+ .github/
+
+ # Ignore Husky configuration files
+ .husky/
+
+ # Ignore documentation and metadata files
+ CONTRIBUTING.md
+ LICENSE
+ README.md
+
+ # Ignore environment examples and sensitive info
+ .env
+ *.local
+ *.example
+
+ # Ignore node modules, logs and cache files
+ **/*.log
+ **/node_modules
+ **/dist
+ **/build
+ **/.cache
+ logs
+ dist-ssr
+ .DS_Store
.env.example CHANGED
@@ -1,4 +1,4 @@
- # Rename this file to .env.local once you have filled in the below environment variables!
+ # Rename this file to .env once you have filled in the below environment variables!
  
  # Get your GROQ API Key here -
  # https://console.groq.com/keys
@@ -32,6 +32,9 @@ OLLAMA_API_BASE_URL=http://localhost:11434
  # You only need this environment variable set if you want to use OpenAI Like models
  OPENAI_LIKE_API_BASE_URL=
  
+ # You only need this environment variable set if you want to use DeepSeek models through their API
+ DEEPSEEK_API_KEY=
+
  # Get your OpenAI Like API Key
  OPENAI_LIKE_API_KEY=
@@ -40,5 +43,10 @@ OPENAI_LIKE_API_KEY=
  # You only need this environment variable set if you want to use Mistral models
  MISTRAL_API_KEY=
  
+ # Get your xAI API key
+ # https://x.ai/api
+ # You only need this environment variable set if you want to use xAI models
+ XAI_API_KEY=
+
  # Include this environment variable if you want more logging for debugging locally
  VITE_LOG_LEVEL=debug
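As a setup sketch implied by the template above (assuming a POSIX shell; the key value is a placeholder, and you only need the keys for providers you use):

```bash
cp .env.example .env                          # the template's first comment says to rename it to .env
echo 'GROQ_API_KEY=gsk_placeholder' >> .env   # hypothetical key value
```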
.gitignore CHANGED
@@ -29,3 +29,6 @@ dist-ssr
  *.vars
  .wrangler
  _worker.bundle
+
+ Modelfile
+ modelfiles
CONTRIBUTING.md CHANGED
@@ -8,6 +8,7 @@ First off, thank you for considering contributing to Bolt.new! This fork aims to
  - [Pull Request Guidelines](#pull-request-guidelines)
  - [Coding Standards](#coding-standards)
  - [Development Setup](#development-setup)
+ - [Deployment with Docker](#docker-deployment-documentation)
  - [Project Structure](#project-structure)
  
  ## Code of Conduct
@@ -88,11 +89,113 @@ pnpm run dev
  
  **Note**: You will need Google Chrome Canary to run this locally if you use Chrome! It's an easy install and a good browser for web development anyway.
  
- ## Questions?
- 
- For any questions about contributing, please:
- 1. Check existing documentation
- 2. Search through issues
- 3. Create a new issue with the question label
- 
- Thank you for contributing to Bolt.new! 🚀
+ ## Testing
+ 
+ Run the test suite with:
+ 
+ ```bash
+ pnpm test
+ ```
+ 
+ ## Deployment
+ 
+ To deploy the application to Cloudflare Pages:
+ 
+ ```bash
+ pnpm run deploy
+ ```
+ 
+ Make sure you have the necessary permissions and Wrangler is correctly configured for your Cloudflare account.
+ 
+ # Docker Deployment Documentation
+ 
+ This guide outlines various methods for building and deploying the application using Docker.
+ 
+ ## Build Methods
+ 
+ ### 1. Using Helper Scripts
+ 
+ NPM scripts are provided for convenient building:
+ 
+ ```bash
+ # Development build
+ npm run dockerbuild
+ 
+ # Production build
+ npm run dockerbuild:prod
+ ```
+ 
+ ### 2. Direct Docker Build Commands
+ 
+ You can use Docker's target feature to specify the build environment:
+ 
+ ```bash
+ # Development build
+ docker build . --target bolt-ai-development
+ 
+ # Production build
+ docker build . --target bolt-ai-production
+ ```
+ 
+ ### 3. Docker Compose with Profiles
+ 
+ Use Docker Compose profiles to manage different environments:
+ 
+ ```bash
+ # Development environment
+ docker-compose --profile development up
+ 
+ # Production environment
+ docker-compose --profile production up
+ ```
+ 
+ ## Running the Application
+ 
+ After building using any of the methods above, run the container with:
+ 
+ ```bash
+ # Development
+ docker run -p 5173:5173 --env-file .env.local bolt-ai:development
+ 
+ # Production
+ docker run -p 5173:5173 --env-file .env.local bolt-ai:production
+ ```
+ 
+ ## Deployment with Coolify
+ 
+ [Coolify](https://github.com/coollabsio/coolify) provides a straightforward deployment process:
+ 
+ 1. Import your Git repository as a new project
+ 2. Select your target environment (development/production)
+ 3. Choose "Docker Compose" as the Build Pack
+ 4. Configure deployment domains
+ 5. Set the custom start command:
+    ```bash
+    docker compose --profile production up
+    ```
+ 6. Configure environment variables
+    - Add necessary AI API keys
+    - Adjust other environment variables as needed
+ 7. Deploy the application
+ 
+ ## VS Code Integration
+ 
+ The `docker-compose.yaml` configuration is compatible with VS Code dev containers:
+ 
+ 1. Open the command palette in VS Code
+ 2. Select the dev container configuration
+ 3. Choose the "development" profile from the context menu
+ 
+ ## Environment Files
+ 
+ Ensure you have the appropriate `.env.local` file configured before running the containers. This file should contain:
+ - API keys
+ - Environment-specific configurations
+ - Other required environment variables
+ 
+ ## Notes
+ 
+ - Port 5173 is exposed and mapped for both development and production environments
+ - Environment variables are loaded from `.env.local`
+ - Different profiles (development/production) can be used for different deployment scenarios
+ - The configuration supports both local development and production deployment
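For reference, a minimal `.env.local` sketch using the variables referenced throughout this repo (set only the providers you actually use; values are placeholders):

```bash
GROQ_API_KEY=gsk_placeholder
ANTHROPIC_API_KEY=sk-ant-placeholder
OLLAMA_API_BASE_URL=http://localhost:11434
VITE_LOG_LEVEL=debug
```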
Dockerfile CHANGED
@@ -1,29 +1,67 @@
- # Use an official Node.js runtime as the base image
- FROM node:20.15.1
+ ARG BASE=node:20.18.0
+ FROM ${BASE} AS base
  
- # Set the working directory in the container
  WORKDIR /app
  
- # Install pnpm
- RUN npm install -g pnpm@9.4.0
- 
- # Copy package.json and pnpm-lock.yaml (if available)
- COPY package.json pnpm-lock.yaml* ./
- 
- # Install dependencies
- RUN pnpm install
+ # Install dependencies (this step is cached as long as the dependencies don't change)
+ COPY package.json pnpm-lock.yaml ./
+ RUN corepack enable pnpm && pnpm install
  
- # Copy the rest of the application code
+ # Copy the rest of your app's source code
  COPY . .
  
- # Build the application
- RUN pnpm run build
- 
- # Make sure bindings.sh is executable
- RUN chmod +x bindings.sh
- 
- # Expose the port the app runs on (adjust if you specified a different port)
- EXPOSE 3000
- 
- # Start the application
- CMD ["pnpm", "run", "start"]
+ # Expose the port the app runs on
+ EXPOSE 5173
+ 
+ # Production image
+ FROM base AS bolt-ai-production
+ 
+ # Define environment variables with default values or let them be overridden
+ ARG GROQ_API_KEY
+ ARG OPENAI_API_KEY
+ ARG ANTHROPIC_API_KEY
+ ARG OPEN_ROUTER_API_KEY
+ ARG GOOGLE_GENERATIVE_AI_API_KEY
+ ARG OLLAMA_API_BASE_URL
+ ARG VITE_LOG_LEVEL=debug
+ 
+ ENV WRANGLER_SEND_METRICS=false \
+     GROQ_API_KEY=${GROQ_API_KEY} \
+     OPENAI_API_KEY=${OPENAI_API_KEY} \
+     ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY} \
+     OPEN_ROUTER_API_KEY=${OPEN_ROUTER_API_KEY} \
+     GOOGLE_GENERATIVE_AI_API_KEY=${GOOGLE_GENERATIVE_AI_API_KEY} \
+     OLLAMA_API_BASE_URL=${OLLAMA_API_BASE_URL} \
+     VITE_LOG_LEVEL=${VITE_LOG_LEVEL}
+ 
+ # Pre-configure wrangler to disable metrics
+ RUN mkdir -p /root/.config/.wrangler && \
+     echo '{"enabled":false}' > /root/.config/.wrangler/metrics.json
+ 
+ RUN npm run build
+ 
+ CMD [ "pnpm", "run", "dockerstart" ]
+ 
+ # Development image
+ FROM base AS bolt-ai-development
+ 
+ # Define the same environment variables for development
+ ARG GROQ_API_KEY
+ ARG OPENAI_API_KEY
+ ARG ANTHROPIC_API_KEY
+ ARG OPEN_ROUTER_API_KEY
+ ARG GOOGLE_GENERATIVE_AI_API_KEY
+ ARG OLLAMA_API_BASE_URL
+ ARG VITE_LOG_LEVEL=debug
+ 
+ ENV GROQ_API_KEY=${GROQ_API_KEY} \
+     OPENAI_API_KEY=${OPENAI_API_KEY} \
+     ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY} \
+     OPEN_ROUTER_API_KEY=${OPEN_ROUTER_API_KEY} \
+     GOOGLE_GENERATIVE_AI_API_KEY=${GOOGLE_GENERATIVE_AI_API_KEY} \
+     OLLAMA_API_BASE_URL=${OLLAMA_API_BASE_URL} \
+     VITE_LOG_LEVEL=${VITE_LOG_LEVEL}
+ 
+ # Create the run directory under the /app workdir
+ RUN mkdir -p /app/run
+ CMD pnpm run dev --host
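As a usage sketch for the multi-stage Dockerfile above (the `--build-arg` values are hypothetical; any ARG left unset simply stays empty and can instead be supplied at run time via `--env-file`):

```bash
docker build . --target bolt-ai-production -t bolt-ai:production \
  --build-arg GROQ_API_KEY=gsk_placeholder \
  --build-arg VITE_LOG_LEVEL=info
```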
README.md CHANGED
@@ -20,6 +20,7 @@ This fork of Bolt.new allows you to choose the LLM that you use for each prompt!
  - ✅ Publish projects directly to GitHub (@goncaloalves)
  - ⬜ Prevent Bolt from rewriting files as often (Done but need to review PR still)
  - ⬜ **HIGH PRIORITY** - Better prompting for smaller LLMs (code window sometimes doesn't start)
+ - ⬜ **HIGH PRIORITY** - Load local projects into the app
  - ⬜ **HIGH PRIORITY** - Attach images to prompts
  - ⬜ **HIGH PRIORITY** - Run agents in the backend as opposed to a single model call
  - ⬜ LM Studio Integration
@@ -27,12 +28,17 @@ This fork of Bolt.new allows you to choose the LLM that you use for each prompt!
  - ⬜ Azure Open AI API Integration
  - ⬜ HuggingFace Integration
  - ⬜ Perplexity Integration
+ - ⬜ Vertex AI Integration
+ - ⬜ Cohere Integration
  - ⬜ Deploy directly to Vercel/Netlify/other similar platforms
- - ⬜ Load local projects into the app
  - ⬜ Ability to revert code to earlier version
  - ⬜ Prompt caching
+ - ⬜ Better prompt enhancing
  - ⬜ Ability to enter API keys in the UI
  - ⬜ Have LLM plan the project in a MD file for better results/transparency
+ - ⬜ VSCode Integration with git-like confirmations
+ - ⬜ Upload documents for knowledge - UI design templates, a code base to reference coding style, etc.
+ - ⬜ Voice prompting
  
  # Bolt.new: AI-Powered Full-Stack Web Development in the Browser
  
@@ -55,28 +61,47 @@ Whether you’re an experienced developer, a PM, or a designer, Bolt.new allows
  
  For developers interested in building their own AI-powered development tools with WebContainers, check out the open-source Bolt codebase in this repo!
  
- ## Prerequisites
+ ## Setup
  
- Before you begin, ensure you have the following installed:
+ Many of you are new to installing software from GitHub. If you have any installation troubles, reach out and submit an "issue" using the links above, or feel free to enhance this documentation by forking the repo, editing the instructions, and opening a pull request.
  
- - Node.js (v20.15.1)
- - pnpm (v9.4.0)
+ 1. Install Git from https://git-scm.com/downloads
  
- ## Setup
+ 2. Install Node.js from https://nodejs.org/en/download/
  
- 1. Clone the repository (if you haven't already):
+ Pay attention to the installer notes after completion.
+ 
+ On all operating systems, the path to Node.js should automatically be added to your system path. But you can check your path if you want to be sure. On Windows, you can search for "edit the system environment variables" in your system, select "Environment Variables..." once you are in the system properties, and then check for a path to Node in your "Path" system variable. On a Mac or Linux machine, it will tell you to check if /usr/local/bin is in your $PATH. To determine if /usr/local/bin is included in $PATH, open your Terminal and run:
+ 
+ ```
+ echo $PATH
+ ```
+ 
+ If you see /usr/local/bin in the output then you're good to go.
+ 
+ 3. Clone the repository (if you haven't already) by opening a Terminal window (or CMD with admin permissions) and then typing in this:
  
- ```bash
+ ```
  git clone https://github.com/coleam00/bolt.new-any-llm.git
  ```
  
- 2. Install dependencies:
- 
- ```bash
- pnpm install
- ```
- 
- 3. Rename `.env.example` to .env.local and add your LLM API keys (you only have to set the ones you want to use and Ollama doesn't need an API key because it runs locally on your computer):
+ 4. Rename .env.example to .env and add your LLM API keys. You will find this file on a Mac at "[your name]/bolt.new-any-llm/.env.example". For Windows and Linux the path will be similar.
+ 
+ ![image](https://github.com/user-attachments/assets/7e6a532c-2268-401f-8310-e8d20c731328)
+ 
+ If you can't see the file indicated above, it's likely you can't view hidden files. On Mac, open a Terminal window and enter the command below. On Windows, you will see the hidden files option in File Explorer Settings. A quick Google search will help you if you are stuck here.
+ 
+ ```
+ defaults write com.apple.finder AppleShowAllFiles YES
+ ```
+ 
+ **NOTE**: you only have to set the ones you want to use and Ollama doesn't need an API key because it runs locally on your computer:
+ 
+ Get your GROQ API Key here: https://console.groq.com/keys
+ 
+ Get your OpenAI API Key by following these instructions: https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key
+ 
+ Get your Anthropic API Key in your account settings: https://console.anthropic.com/settings/keys
  
  ```
  GROQ_API_KEY=XXX
@@ -90,7 +115,98 @@ Optionally, you can set the debug level:
  VITE_LOG_LEVEL=debug
  ```
  
- **Important**: Never commit your `.env.local` file to version control. It's already included in .gitignore.
+ **Important**: Never commit your `.env` file to version control. It's already included in .gitignore.
+ 
+ ## Run with Docker
+ 
+ Prerequisites:
+ 
+ Git and Node.js as mentioned above, as well as Docker: https://www.docker.com/
+ 
+ ### 1a. Using Helper Scripts
+ 
+ NPM scripts are provided for convenient building:
+ 
+ ```bash
+ # Development build
+ npm run dockerbuild
+ 
+ # Production build
+ npm run dockerbuild:prod
+ ```
+ 
+ ### 1b. Direct Docker Build Commands (alternative to using NPM scripts)
+ 
+ You can use Docker's target feature to specify the build environment instead of using NPM scripts if you wish:
+ 
+ ```bash
+ # Development build
+ docker build . --target bolt-ai-development
+ 
+ # Production build
+ docker build . --target bolt-ai-production
+ ```
+ 
+ ### 2. Docker Compose with Profiles to Run the Container
+ 
+ Use Docker Compose profiles to manage different environments:
+ 
+ ```bash
+ # Development environment
+ docker-compose --profile development up
+ 
+ # Production environment
+ docker-compose --profile production up
+ ```
+ 
+ When you run the Docker Compose command with the development profile, any changes you make on your machine to the code will automatically be reflected in the site running on the container (i.e. hot reloading still applies!).
+ 
+ ## Run Without Docker
+ 
+ 1. Install dependencies using Terminal (or CMD in Windows with admin permissions):
+ 
+ ```
+ pnpm install
+ ```
+ 
+ If you get an error saying "command not found: pnpm" or similar, then that means pnpm isn't installed. You can install it via this:
+ 
+ ```
+ sudo npm install -g pnpm
+ ```
+ 
+ 2. Start the application with the command:
+ 
+ ```bash
+ pnpm run dev
+ ```
+ 
+ ## Super Important Note on Running Ollama Models
+ 
+ Ollama models by default only have 2048 tokens for their context window, even for large models that can easily handle way more. This is not a large enough window to handle the Bolt.new/oTToDev prompt! You have to create a version of any model you want to use where you specify a larger context window. Luckily it's super easy to do that.
+ 
+ All you have to do is:
+ 
+ - Create a file called "Modelfile" (no file extension) anywhere on your computer
+ - Put in the two lines:
+ 
+ ```
+ FROM [Ollama model ID such as qwen2.5-coder:7b]
+ PARAMETER num_ctx 32768
+ ```
+ 
+ - Run the command:
+ 
+ ```
+ ollama create -f Modelfile [your new model ID, can be whatever you want (example: qwen2.5-coder-extra-ctx:7b)]
+ ```
+ 
+ Now you have a new Ollama model that isn't limited by the small default context length. You'll see this new model in the list of Ollama models along with all the others you pulled!
  
  ## Adding New LLMs:
  
app/components/chat/APIKeyManager.tsx ADDED
@@ -0,0 +1,49 @@
+ import React, { useState } from 'react';
+ import { IconButton } from '~/components/ui/IconButton';
+ 
+ interface APIKeyManagerProps {
+   provider: string;
+   apiKey: string;
+   setApiKey: (key: string) => void;
+ }
+ 
+ export const APIKeyManager: React.FC<APIKeyManagerProps> = ({ provider, apiKey, setApiKey }) => {
+   const [isEditing, setIsEditing] = useState(false);
+   const [tempKey, setTempKey] = useState(apiKey);
+ 
+   const handleSave = () => {
+     setApiKey(tempKey);
+     setIsEditing(false);
+   };
+ 
+   return (
+     <div className="flex items-center gap-2 mt-2 mb-2">
+       <span className="text-sm text-bolt-elements-textSecondary">{provider} API Key:</span>
+       {isEditing ? (
+         <>
+           <input
+             type="password"
+             value={tempKey}
+             onChange={(e) => setTempKey(e.target.value)}
+             className="flex-1 p-1 text-sm rounded border border-bolt-elements-borderColor bg-bolt-elements-prompt-background text-bolt-elements-textPrimary focus:outline-none focus:ring-2 focus:ring-bolt-elements-focus"
+           />
+           <IconButton onClick={handleSave} title="Save API Key">
+             <div className="i-ph:check" />
+           </IconButton>
+           <IconButton onClick={() => setIsEditing(false)} title="Cancel">
+             <div className="i-ph:x" />
+           </IconButton>
+         </>
+       ) : (
+         <>
+           <span className="flex-1 text-sm text-bolt-elements-textPrimary">
+             {apiKey ? '••••••••' : 'Not set (will still work if set in .env file)'}
+           </span>
+           <IconButton onClick={() => setIsEditing(true)} title="Edit API Key">
+             <div className="i-ph:pencil-simple" />
+           </IconButton>
+         </>
+       )}
+     </div>
+   );
+ };
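As wired up later in `BaseChat.tsx`, the component is rendered like this (the `apiKeys` map and `updateApiKey` helper are defined in that file):

```tsx
<APIKeyManager
  provider={provider}
  apiKey={apiKeys[provider] || ''}
  setApiKey={(key) => updateApiKey(provider, key)}
/>
```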
app/components/chat/BaseChat.tsx CHANGED
@@ -1,7 +1,7 @@
  // @ts-nocheck
  // Preventing TS checks with files presented in the video for a better presentation.
  import type { Message } from 'ai';
- import React, { type RefCallback } from 'react';
+ import React, { type RefCallback, useEffect } from 'react';
  import { ClientOnly } from 'remix-utils/client-only';
  import { Menu } from '~/components/sidebar/Menu.client';
  import { IconButton } from '~/components/ui/IconButton';
@@ -11,6 +11,8 @@ import { MODEL_LIST, DEFAULT_PROVIDER } from '~/utils/constants';
  import { Messages } from './Messages.client';
  import { SendButton } from './SendButton.client';
  import { useState } from 'react';
+ import { APIKeyManager } from './APIKeyManager';
+ import Cookies from 'js-cookie';
  
  import styles from './BaseChat.module.scss';
  
@@ -24,18 +26,17 @@ const EXAMPLE_PROMPTS = [
  
  const providerList = [...new Set(MODEL_LIST.map((model) => model.provider))]
  
- const ModelSelector = ({ model, setModel, modelList, providerList }) => {
-   const [provider, setProvider] = useState(DEFAULT_PROVIDER);
+ const ModelSelector = ({ model, setModel, provider, setProvider, modelList, providerList }) => {
    return (
-     <div className="mb-2">
-       <select
+     <div className="mb-2 flex gap-2">
+       <select
          value={provider}
          onChange={(e) => {
            setProvider(e.target.value);
            const firstModel = [...modelList].find(m => m.provider == e.target.value);
            setModel(firstModel ? firstModel.name : '');
          }}
-         className="w-full p-2 rounded-lg border border-bolt-elements-borderColor bg-bolt-elements-prompt-background text-bolt-elements-textPrimary focus:outline-none"
+         className="flex-1 p-2 rounded-lg border border-bolt-elements-borderColor bg-bolt-elements-prompt-background text-bolt-elements-textPrimary focus:outline-none focus:ring-2 focus:ring-bolt-elements-focus transition-all"
        >
          {providerList.map((provider) => (
            <option key={provider} value={provider}>
@@ -52,7 +53,7 @@ const ModelSelector = ({ model, setModel, modelList, providerList }) => {
        <select
          value={model}
          onChange={(e) => setModel(e.target.value)}
-         className="w-full p-2 rounded-lg border border-bolt-elements-borderColor bg-bolt-elements-prompt-background text-bolt-elements-textPrimary focus:outline-none"
+         className="flex-1 p-2 rounded-lg border border-bolt-elements-borderColor bg-bolt-elements-prompt-background text-bolt-elements-textPrimary focus:outline-none focus:ring-2 focus:ring-bolt-elements-focus transition-all"
        >
          {[...modelList].filter(e => e.provider == provider && e.name).map((modelOption) => (
            <option key={modelOption.name} value={modelOption.name}>
@@ -79,6 +80,8 @@ interface BaseChatProps {
    input?: string;
    model: string;
    setModel: (model: string) => void;
+   provider: string;
+   setProvider: (provider: string) => void;
    handleStop?: () => void;
    sendMessage?: (event: React.UIEvent, messageInput?: string) => void;
    handleInputChange?: (event: React.ChangeEvent<HTMLTextAreaElement>) => void;
@@ -100,6 +103,8 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
    input = '',
    model,
    setModel,
+   provider,
+   setProvider,
    sendMessage,
    handleInputChange,
    enhancePrompt,
@@ -108,6 +113,40 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
    ref,
  ) => {
    const TEXTAREA_MAX_HEIGHT = chatStarted ? 400 : 200;
+   const [apiKeys, setApiKeys] = useState<Record<string, string>>({});
+ 
+   useEffect(() => {
+     // Load API keys from cookies on component mount
+     try {
+       const storedApiKeys = Cookies.get('apiKeys');
+       if (storedApiKeys) {
+         const parsedKeys = JSON.parse(storedApiKeys);
+         if (typeof parsedKeys === 'object' && parsedKeys !== null) {
+           setApiKeys(parsedKeys);
+         }
+       }
+     } catch (error) {
+       console.error('Error loading API keys from cookies:', error);
+       // Clear invalid cookie data
+       Cookies.remove('apiKeys');
+     }
+   }, []);
+ 
+   const updateApiKey = (provider: string, key: string) => {
+     try {
+       const updatedApiKeys = { ...apiKeys, [provider]: key };
+       setApiKeys(updatedApiKeys);
+       // Save updated API keys to cookies with 30 day expiry and secure settings
+       Cookies.set('apiKeys', JSON.stringify(updatedApiKeys), {
+         expires: 30, // 30 days
+         secure: true, // Only send over HTTPS
+         sameSite: 'strict', // Protect against CSRF
+         path: '/' // Accessible across the site
+       });
+     } catch (error) {
+       console.error('Error saving API keys to cookies:', error);
+     }
+   };
  
    return (
      <div
@@ -122,11 +161,11 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
        <div ref={scrollRef} className="flex overflow-y-auto w-full h-full">
          <div className={classNames(styles.Chat, 'flex flex-col flex-grow min-w-[var(--chat-min-width)] h-full')}>
            {!chatStarted && (
-             <div id="intro" className="mt-[26vh] max-w-chat mx-auto">
-               <h1 className="text-5xl text-center font-bold text-bolt-elements-textPrimary mb-2">
+             <div id="intro" className="mt-[26vh] max-w-chat mx-auto text-center">
+               <h1 className="text-6xl font-bold text-bolt-elements-textPrimary mb-4 animate-fade-in">
                  Where ideas begin
                </h1>
-               <p className="mb-4 text-center text-bolt-elements-textSecondary">
+               <p className="text-xl mb-8 text-bolt-elements-textSecondary animate-fade-in animation-delay-200">
                  Bring ideas to life in seconds or get help on existing projects.
                </p>
              </div>
@@ -157,16 +196,23 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
              model={model}
              setModel={setModel}
              modelList={MODEL_LIST}
+             provider={provider}
+             setProvider={setProvider}
              providerList={providerList}
            />
+           <APIKeyManager
+             provider={provider}
+             apiKey={apiKeys[provider] || ''}
+             setApiKey={(key) => updateApiKey(provider, key)}
+           />
            <div
              className={classNames(
-               'shadow-sm border border-bolt-elements-borderColor bg-bolt-elements-prompt-background backdrop-filter backdrop-blur-[8px] rounded-lg overflow-hidden',
+               'shadow-lg border border-bolt-elements-borderColor bg-bolt-elements-prompt-background backdrop-filter backdrop-blur-[8px] rounded-lg overflow-hidden transition-all',
              )}
            >
              <textarea
                ref={textareaRef}
-               className={`w-full pl-4 pt-4 pr-16 focus:outline-none resize-none text-md text-bolt-elements-textPrimary placeholder-bolt-elements-textTertiary bg-transparent`}
+               className={`w-full pl-4 pt-4 pr-16 focus:outline-none focus:ring-2 focus:ring-bolt-elements-focus resize-none text-md text-bolt-elements-textPrimary placeholder-bolt-elements-textTertiary bg-transparent transition-all`}
                onKeyDown={(event) => {
                  if (event.key === 'Enter') {
                    if (event.shiftKey) {
@@ -205,12 +251,12 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
              />
            )}
          </ClientOnly>
-         <div className="flex justify-between text-sm p-4 pt-2">
+         <div className="flex justify-between items-center text-sm p-4 pt-2">
            <div className="flex gap-1 items-center">
              <IconButton
                title="Enhance prompt"
                disabled={input.length === 0 || enhancingPrompt}
-               className={classNames({
+               className={classNames('transition-all', {
                  'opacity-100!': enhancingPrompt,
                  'text-bolt-elements-item-contentAccent! pr-1.5 enabled:hover:bg-bolt-elements-item-backgroundAccent!':
                    promptEnhanced,
@@ -219,7 +265,7 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
              >
                {enhancingPrompt ? (
                  <>
-                   <div className="i-svg-spinners:90-ring-with-bg text-bolt-elements-loader-progress text-xl"></div>
+                   <div className="i-svg-spinners:90-ring-with-bg text-bolt-elements-loader-progress text-xl animate-spin"></div>
                    <div className="ml-1.5">Enhancing prompt...</div>
                  </>
                ) : (
@@ -232,7 +278,7 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
            </div>
            {input.length > 3 ? (
              <div className="text-xs text-bolt-elements-textTertiary">
-               Use <kbd className="kdb">Shift</kbd> + <kbd className="kdb">Return</kbd> for a new line
+               Use <kbd className="kdb px-1.5 py-0.5 rounded bg-bolt-elements-background-depth-2">Shift</kbd> + <kbd className="kdb px-1.5 py-0.5 rounded bg-bolt-elements-background-depth-2">Return</kbd> for a new line
              </div>
            ) : null}
          </div>
@@ -266,4 +312,4 @@ export const BaseChat = React.forwardRef<HTMLDivElement, BaseChatProps>(
        </div>
      );
    },
- );
+ );
app/components/chat/Chat.client.tsx CHANGED
@@ -11,10 +11,11 @@ import { useChatHistory } from '~/lib/persistence';
  import { chatStore } from '~/lib/stores/chat';
  import { workbenchStore } from '~/lib/stores/workbench';
  import { fileModificationsToHTML } from '~/utils/diff';
- import { DEFAULT_MODEL } from '~/utils/constants';
+ import { DEFAULT_MODEL, DEFAULT_PROVIDER } from '~/utils/constants';
  import { cubicEasingFn } from '~/utils/easings';
  import { createScopedLogger, renderLogger } from '~/utils/logger';
  import { BaseChat } from './BaseChat';
+ import Cookies from 'js-cookie';
  
  const toastAnimation = cssTransition({
    enter: 'animated fadeInRight',
@@ -74,13 +75,19 @@ export const ChatImpl = memo(({ initialMessages, storeMessageHistory }: ChatProp
  
    const [chatStarted, setChatStarted] = useState(initialMessages.length > 0);
    const [model, setModel] = useState(DEFAULT_MODEL);
+   const [provider, setProvider] = useState(DEFAULT_PROVIDER);
  
    const { showChat } = useStore(chatStore);
  
    const [animationScope, animate] = useAnimate();
  
+   const [apiKeys, setApiKeys] = useState<Record<string, string>>({});
+ 
    const { messages, isLoading, input, handleInputChange, setInput, stop, append } = useChat({
      api: '/api/chat',
+     body: {
+       apiKeys
+     },
      onError: (error) => {
        logger.error('Request failed\n\n', error);
        toast.error('There was an error processing your request');
@@ -182,7 +189,7 @@ export const ChatImpl = memo(({ initialMessages, storeMessageHistory }: ChatProp
       * manually reset the input and we'd have to manually pass in file attachments. However, those
       * aren't relevant here.
       */
-     append({ role: 'user', content: `[Model: ${model}]\n\n${diff}\n\n${_input}` });
+     append({ role: 'user', content: `[Model: ${model}]\n\n[Provider: ${provider}]\n\n${diff}\n\n${_input}` });
  
      /**
       * After sending a new message we reset all modifications since the model
@@ -190,7 +197,7 @@ export const ChatImpl = memo(({ initialMessages, storeMessageHistory }: ChatProp
       */
      workbenchStore.resetAllFileModifications();
    } else {
-     append({ role: 'user', content: `[Model: ${model}]\n\n${_input}` });
+     append({ role: 'user', content: `[Model: ${model}]\n\n[Provider: ${provider}]\n\n${_input}` });
    }
  
    setInput('');
@@ -202,6 +209,13 @@ export const ChatImpl = memo(({ initialMessages, storeMessageHistory }: ChatProp
  
    const [messageRef, scrollRef] = useSnapScroll();
  
+   useEffect(() => {
+     const storedApiKeys = Cookies.get('apiKeys');
+     if (storedApiKeys) {
+       setApiKeys(JSON.parse(storedApiKeys));
+     }
+   }, []);
+ 
    return (
      <BaseChat
        ref={animationScope}
@@ -215,6 +229,8 @@ export const ChatImpl = memo(({ initialMessages, storeMessageHistory }: ChatProp
        sendMessage={sendMessage}
        model={model}
        setModel={setModel}
+       provider={provider}
+       setProvider={setProvider}
        messageRef={messageRef}
        scrollRef={scrollRef}
        handleInputChange={handleInputChange}
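With the two `append` calls above, every user prompt is tagged before it is sent; a sketch of the resulting message content for a hypothetical prompt:

```ts
// model = 'gpt-4', provider = 'OpenAI', _input = 'Build me a todo app'
// content sent to /api/chat:
// '[Model: gpt-4]\n\n[Provider: OpenAI]\n\nBuild me a todo app'
```

`MODEL_REGEX` and `PROVIDER_REGEX` in `app/utils/constants.ts` strip these tags back out on the server and in `UserMessage.tsx`.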
app/components/chat/UserMessage.tsx CHANGED
@@ -1,7 +1,7 @@
  // @ts-nocheck
  // Preventing TS checks with files presented in the video for a better presentation.
  import { modificationsRegex } from '~/utils/diff';
- import { MODEL_REGEX } from '~/utils/constants';
+ import { MODEL_REGEX, PROVIDER_REGEX } from '~/utils/constants';
  import { Markdown } from './Markdown';
  
  interface UserMessageProps {
@@ -17,5 +17,5 @@ export function UserMessage({ content }: UserMessageProps) {
  }
  
  function sanitizeUserMessage(content: string) {
-   return content.replace(modificationsRegex, '').replace(MODEL_REGEX, '').trim();
+   return content.replace(modificationsRegex, '').replace(MODEL_REGEX, 'Using: $1').replace(PROVIDER_REGEX, ' ($1)\n\n').trim();
  }
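Given the replacements above, a sketch of the displayed result (tag format from `Chat.client.tsx`; the prompt text is hypothetical):

```ts
sanitizeUserMessage('[Model: gpt-4]\n\n[Provider: OpenAI]\n\nBuild me a todo app');
// -> 'Using: gpt-4 (OpenAI)\n\nBuild me a todo app'
```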
app/lib/.server/llm/api-key.ts CHANGED
@@ -2,12 +2,18 @@
  // Preventing TS checks with files presented in the video for a better presentation.
  import { env } from 'node:process';
  
- export function getAPIKey(cloudflareEnv: Env, provider: string) {
+ export function getAPIKey(cloudflareEnv: Env, provider: string, userApiKeys?: Record<string, string>) {
    /**
     * The `cloudflareEnv` is only used when deployed or when previewing locally.
     * In development the environment variables are available through `env`.
     */
  
+   // First check user-provided API keys
+   if (userApiKeys?.[provider]) {
+     return userApiKeys[provider];
+   }
+ 
+   // Fall back to environment variables
    switch (provider) {
      case 'Anthropic':
        return env.ANTHROPIC_API_KEY || cloudflareEnv.ANTHROPIC_API_KEY;
@@ -25,6 +31,8 @@ export function getAPIKey(cloudflareEnv: Env, provider: string) {
        return env.MISTRAL_API_KEY || cloudflareEnv.MISTRAL_API_KEY;
      case "OpenAILike":
        return env.OPENAI_LIKE_API_KEY || cloudflareEnv.OPENAI_LIKE_API_KEY;
+     case "xAI":
+       return env.XAI_API_KEY || cloudflareEnv.XAI_API_KEY;
      default:
        return "";
    }
@@ -35,7 +43,11 @@ export function getBaseURL(cloudflareEnv: Env, provider: string) {
      case 'OpenAILike':
        return env.OPENAI_LIKE_API_BASE_URL || cloudflareEnv.OPENAI_LIKE_API_BASE_URL;
      case 'Ollama':
-       return env.OLLAMA_API_BASE_URL || cloudflareEnv.OLLAMA_API_BASE_URL || "http://localhost:11434";
+       let baseUrl = env.OLLAMA_API_BASE_URL || cloudflareEnv.OLLAMA_API_BASE_URL || "http://localhost:11434";
+       if (env.RUNNING_IN_DOCKER === 'true') {
+         baseUrl = baseUrl.replace("localhost", "host.docker.internal");
+       }
+       return baseUrl;
      default:
        return "";
    }
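A sketch of the precedence `getAPIKey` now implements (values hypothetical):

```ts
// A key entered in the UI wins over the environment:
getAPIKey(cloudflareEnv, 'Anthropic', { Anthropic: 'sk-from-ui' }); // -> 'sk-from-ui'

// With no user-provided key, it falls back to the env switch below:
getAPIKey(cloudflareEnv, 'Anthropic'); // -> env.ANTHROPIC_API_KEY || cloudflareEnv.ANTHROPIC_API_KEY
```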
app/lib/.server/llm/model.ts CHANGED
@@ -58,7 +58,10 @@ export function getGroqModel(apiKey: string, model: string) {
  }
  
  export function getOllamaModel(baseURL: string, model: string) {
-   let Ollama = ollama(model);
+   let Ollama = ollama(model, {
+     numCtx: 32768,
+   });
+ 
    Ollama.config.baseURL = `${baseURL}/api`;
    return Ollama;
  }
@@ -80,8 +83,16 @@ export function getOpenRouterModel(apiKey: string, model: string) {
    return openRouter.chat(model);
  }
  
- export function getModel(provider: string, model: string, env: Env) {
-   const apiKey = getAPIKey(env, provider);
+ export function getXAIModel(apiKey: string, model: string) {
+   const openai = createOpenAI({
+     baseURL: 'https://api.x.ai/v1',
+     apiKey,
+   });
+ 
+   return openai(model);
+ }
+ 
+ export function getModel(provider: string, model: string, env: Env, apiKeys?: Record<string, string>) {
+   const apiKey = getAPIKey(env, provider, apiKeys);
    const baseURL = getBaseURL(env, provider);
  
    switch (provider) {
@@ -101,6 +112,8 @@ export function getModel(provider: string, model: string, env: Env) {
        return getDeepseekModel(apiKey, model)
      case 'Mistral':
        return getMistralModel(apiKey, model);
+     case 'xAI':
+       return getXAIModel(apiKey, model);
      default:
        return getOllamaModel(baseURL, model);
    }
app/lib/.server/llm/stream-text.ts CHANGED
@@ -4,7 +4,7 @@ import { streamText as _streamText, convertToCoreMessages } from 'ai';
  import { getModel } from '~/lib/.server/llm/model';
  import { MAX_TOKENS } from './constants';
  import { getSystemPrompt } from './prompts';
- import { MODEL_LIST, DEFAULT_MODEL, DEFAULT_PROVIDER } from '~/utils/constants';
+ import { MODEL_LIST, DEFAULT_MODEL, DEFAULT_PROVIDER, MODEL_REGEX, PROVIDER_REGEX } from '~/utils/constants';
  
  interface ToolResult<Name extends string, Args, Result> {
    toolCallId: string;
@@ -24,42 +24,53 @@ export type Messages = Message[];
  
  export type StreamingOptions = Omit<Parameters<typeof _streamText>[0], 'model'>;
  
- function extractModelFromMessage(message: Message): { model: string; content: string } {
-   const modelRegex = /^\[Model: (.*?)\]\n\n/;
-   const match = message.content.match(modelRegex);
+ function extractPropertiesFromMessage(message: Message): { model: string; provider: string; content: string } {
+   // Extract model
+   const modelMatch = message.content.match(MODEL_REGEX);
+   const model = modelMatch ? modelMatch[1] : DEFAULT_MODEL;
  
-   if (match) {
-     const model = match[1];
-     const content = message.content.replace(modelRegex, '');
-     return { model, content };
-   }
+   // Extract provider
+   const providerMatch = message.content.match(PROVIDER_REGEX);
+   const provider = providerMatch ? providerMatch[1] : DEFAULT_PROVIDER;
  
-   // Default model if not specified
-   return { model: DEFAULT_MODEL, content: message.content };
+   // Remove model and provider lines from content
+   const cleanedContent = message.content
+     .replace(MODEL_REGEX, '')
+     .replace(PROVIDER_REGEX, '')
+     .trim();
+ 
+   return { model, provider, content: cleanedContent };
  }
  
- export function streamText(messages: Messages, env: Env, options?: StreamingOptions) {
+ export function streamText(
+   messages: Messages,
+   env: Env,
+   options?: StreamingOptions,
+   apiKeys?: Record<string, string>
+ ) {
    let currentModel = DEFAULT_MODEL;
+   let currentProvider = DEFAULT_PROVIDER;
+ 
    const processedMessages = messages.map((message) => {
      if (message.role === 'user') {
-       const { model, content } = extractModelFromMessage(message);
-       if (model && MODEL_LIST.find((m) => m.name === model)) {
-         currentModel = model; // Update the current model
+       const { model, provider, content } = extractPropertiesFromMessage(message);
+ 
+       if (MODEL_LIST.find((m) => m.name === model)) {
+         currentModel = model;
        }
+ 
+       currentProvider = provider;
+ 
        return { ...message, content };
      }
-     return message;
-   });
  
-   const provider = MODEL_LIST.find((model) => model.name === currentModel)?.provider || DEFAULT_PROVIDER;
+     return message; // No changes for non-user messages
+   });
  
    return _streamText({
-     model: getModel(provider, currentModel, env),
+     model: getModel(currentProvider, currentModel, env, apiKeys),
      system: getSystemPrompt(),
      maxTokens: MAX_TOKENS,
-     // headers: {
-     //   'anthropic-beta': 'max-tokens-3-5-sonnet-2024-07-15',
-     // },
      messages: convertToCoreMessages(processedMessages),
      ...options,
    });
app/routes/api.chat.ts CHANGED
@@ -11,13 +11,17 @@ export async function action(args: ActionFunctionArgs) {
  }
  
  async function chatAction({ context, request }: ActionFunctionArgs) {
-   const { messages } = await request.json<{ messages: Messages }>();
+   const { messages, apiKeys } = await request.json<{
+     messages: Messages,
+     apiKeys: Record<string, string>
+   }>();
  
    const stream = new SwitchableStream();
  
    try {
      const options: StreamingOptions = {
        toolChoice: 'none',
+       apiKeys,
        onFinish: async ({ text: content, finishReason }) => {
          if (finishReason !== 'length') {
            return stream.close();
@@ -40,7 +44,7 @@ async function chatAction({ context, request }: ActionFunctionArgs) {
      },
    };
  
-   const result = await streamText(messages, context.cloudflare.env, options);
+   const result = await streamText(messages, context.cloudflare.env, options, apiKeys);
  
    stream.switchSource(result.toAIStream());
  
@@ -52,6 +56,13 @@ async function chatAction({ context, request }: ActionFunctionArgs) {
    });
  } catch (error) {
    console.log(error);
+ 
+   if (error.message?.includes('API key')) {
+     throw new Response('Invalid or missing API key', {
+       status: 401,
+       statusText: 'Unauthorized'
+     });
+   }
  
    throw new Response(null, {
      status: 500,
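A sketch of the JSON body the client now POSTs to `/api/chat` (shape from the destructuring above; values hypothetical):

```ts
const payload = {
  messages: [{ role: 'user', content: '[Model: gpt-4]\n\n[Provider: OpenAI]\n\nHello' }],
  apiKeys: { OpenAI: 'sk-from-ui' },
};
```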
app/utils/constants.ts CHANGED
@@ -4,6 +4,7 @@ export const WORK_DIR_NAME = 'project';
4
  export const WORK_DIR = `/home/${WORK_DIR_NAME}`;
5
  export const MODIFICATIONS_TAG_NAME = 'bolt_file_modifications';
6
  export const MODEL_REGEX = /^\[Model: (.*?)\]\n\n/;
 
7
  export const DEFAULT_MODEL = 'claude-3-5-sonnet-20240620';
8
  export const DEFAULT_PROVIDER = 'Anthropic';
9
 
@@ -15,11 +16,12 @@ const staticModels: ModelInfo[] = [
15
  { name: 'deepseek/deepseek-coder', label: 'Deepseek-Coder V2 236B (OpenRouter)', provider: 'OpenRouter' },
16
  { name: 'google/gemini-flash-1.5', label: 'Google Gemini Flash 1.5 (OpenRouter)', provider: 'OpenRouter' },
17
  { name: 'google/gemini-pro-1.5', label: 'Google Gemini Pro 1.5 (OpenRouter)', provider: 'OpenRouter' },
 
18
  { name: 'mistralai/mistral-nemo', label: 'OpenRouter Mistral Nemo (OpenRouter)', provider: 'OpenRouter' },
19
  { name: 'qwen/qwen-110b-chat', label: 'OpenRouter Qwen 110b Chat (OpenRouter)', provider: 'OpenRouter' },
20
  { name: 'cohere/command', label: 'Cohere Command (OpenRouter)', provider: 'OpenRouter' },
21
  { name: 'gemini-1.5-flash-latest', label: 'Gemini 1.5 Flash', provider: 'Google' },
22
- { name: 'gemini-1.5-pro-latest', label: 'Gemini 1.5 Pro', provider: 'Google'},
23
  { name: 'llama-3.1-70b-versatile', label: 'Llama 3.1 70b (Groq)', provider: 'Groq' },
24
  { name: 'llama-3.1-8b-instant', label: 'Llama 3.1 8b (Groq)', provider: 'Groq' },
25
  { name: 'llama-3.2-11b-vision-preview', label: 'Llama 3.2 11b (Groq)', provider: 'Groq' },
@@ -32,6 +34,7 @@ const staticModels: ModelInfo[] = [
32
  { name: 'gpt-4-turbo', label: 'GPT-4 Turbo', provider: 'OpenAI' },
33
  { name: 'gpt-4', label: 'GPT-4', provider: 'OpenAI' },
34
  { name: 'gpt-3.5-turbo', label: 'GPT-3.5 Turbo', provider: 'OpenAI' },
 
35
  { name: 'deepseek-coder', label: 'Deepseek-Coder', provider: 'Deepseek'},
36
  { name: 'deepseek-chat', label: 'Deepseek-Chat', provider: 'Deepseek'},
37
  { name: 'open-mistral-7b', label: 'Mistral 7B', provider: 'Mistral' },
@@ -47,9 +50,25 @@ const staticModels: ModelInfo[] = [
47
 
48
  export let MODEL_LIST: ModelInfo[] = [...staticModels];
49
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
50
  async function getOllamaModels(): Promise<ModelInfo[]> {
51
  try {
52
- const base_url = import.meta.env.OLLAMA_API_BASE_URL || "http://localhost:11434";
53
  const response = await fetch(`${base_url}/api/tags`);
54
  const data = await response.json() as OllamaApiResponse;
55
 
@@ -64,32 +83,32 @@ async function getOllamaModels(): Promise<ModelInfo[]> {
64
  }
65
 
66
  async function getOpenAILikeModels(): Promise<ModelInfo[]> {
67
- try {
68
- const base_url =import.meta.env.OPENAI_LIKE_API_BASE_URL || "";
69
- if (!base_url) {
70
  return [];
71
- }
72
- const api_key = import.meta.env.OPENAI_LIKE_API_KEY ?? "";
73
- const response = await fetch(`${base_url}/models`, {
74
- headers: {
75
- Authorization: `Bearer ${api_key}`,
76
- }
77
- });
78
  const res = await response.json() as any;
79
  return res.data.map((model: any) => ({
80
  name: model.id,
81
  label: model.id,
82
  provider: 'OpenAILike',
83
  }));
84
- }catch (e) {
85
- return []
86
- }
87
 
88
  }
89
  export const WORK_DIR = `/home/${WORK_DIR_NAME}`;
  export const MODIFICATIONS_TAG_NAME = 'bolt_file_modifications';
  export const MODEL_REGEX = /^\[Model: (.*?)\]\n\n/;
+ export const PROVIDER_REGEX = /\[Provider: (.*?)\]\n\n/;
  export const DEFAULT_MODEL = 'claude-3-5-sonnet-20240620';
  export const DEFAULT_PROVIDER = 'Anthropic';

  { name: 'deepseek/deepseek-coder', label: 'Deepseek-Coder V2 236B (OpenRouter)', provider: 'OpenRouter' },
  { name: 'google/gemini-flash-1.5', label: 'Google Gemini Flash 1.5 (OpenRouter)', provider: 'OpenRouter' },
  { name: 'google/gemini-pro-1.5', label: 'Google Gemini Pro 1.5 (OpenRouter)', provider: 'OpenRouter' },
+ { name: 'x-ai/grok-beta', label: 'xAI Grok Beta (OpenRouter)', provider: 'OpenRouter' },
  { name: 'mistralai/mistral-nemo', label: 'OpenRouter Mistral Nemo (OpenRouter)', provider: 'OpenRouter' },
  { name: 'qwen/qwen-110b-chat', label: 'OpenRouter Qwen 110b Chat (OpenRouter)', provider: 'OpenRouter' },
  { name: 'cohere/command', label: 'Cohere Command (OpenRouter)', provider: 'OpenRouter' },
  { name: 'gemini-1.5-flash-latest', label: 'Gemini 1.5 Flash', provider: 'Google' },
+ { name: 'gemini-1.5-pro-latest', label: 'Gemini 1.5 Pro', provider: 'Google' },
  { name: 'llama-3.1-70b-versatile', label: 'Llama 3.1 70b (Groq)', provider: 'Groq' },
  { name: 'llama-3.1-8b-instant', label: 'Llama 3.1 8b (Groq)', provider: 'Groq' },
  { name: 'llama-3.2-11b-vision-preview', label: 'Llama 3.2 11b (Groq)', provider: 'Groq' },

  { name: 'gpt-4-turbo', label: 'GPT-4 Turbo', provider: 'OpenAI' },
  { name: 'gpt-4', label: 'GPT-4', provider: 'OpenAI' },
  { name: 'gpt-3.5-turbo', label: 'GPT-3.5 Turbo', provider: 'OpenAI' },
+ { name: 'grok-beta', label: 'xAI Grok Beta', provider: 'xAI' },
  { name: 'deepseek-coder', label: 'Deepseek-Coder', provider: 'Deepseek' },
  { name: 'deepseek-chat', label: 'Deepseek-Chat', provider: 'Deepseek' },
  { name: 'open-mistral-7b', label: 'Mistral 7B', provider: 'Mistral' },

  export let MODEL_LIST: ModelInfo[] = [...staticModels];

+ const getOllamaBaseUrl = () => {
+   const defaultBaseUrl = import.meta.env.OLLAMA_API_BASE_URL || 'http://localhost:11434';
+
+   // Check if we're in the browser
+   if (typeof window !== 'undefined') {
+     // Frontend always uses localhost
+     return defaultBaseUrl;
+   }
+
+   // Backend: check if we're running in Docker
+   const isDocker = process.env.RUNNING_IN_DOCKER === 'true';
+
+   return isDocker ? defaultBaseUrl.replace('localhost', 'host.docker.internal') : defaultBaseUrl;
+ };
+
  async function getOllamaModels(): Promise<ModelInfo[]> {
    try {
+     const base_url = getOllamaBaseUrl();
      const response = await fetch(`${base_url}/api/tags`);
      const data = await response.json() as OllamaApiResponse;

  }

  async function getOpenAILikeModels(): Promise<ModelInfo[]> {
+   try {
+     const base_url = import.meta.env.OPENAI_LIKE_API_BASE_URL || '';
+     if (!base_url) {
        return [];
+     }
+     const api_key = import.meta.env.OPENAI_LIKE_API_KEY ?? '';
+     const response = await fetch(`${base_url}/models`, {
+       headers: {
+         Authorization: `Bearer ${api_key}`,
+       },
+     });
      const res = await response.json() as any;
      return res.data.map((model: any) => ({
        name: model.id,
        label: model.id,
        provider: 'OpenAILike',
      }));
+   } catch (e) {
+     return [];
+   }
  }

  async function initializeModelList(): Promise<void> {
    const ollamaModels = await getOllamaModels();
    const openAiLikeModels = await getOpenAILikeModels();
-   MODEL_LIST = [...ollamaModels,...openAiLikeModels, ...staticModels];
+   MODEL_LIST = [...ollamaModels, ...openAiLikeModels, ...staticModels];
  }
  initializeModelList().then();
+ export { getOllamaModels, getOpenAILikeModels, initializeModelList };
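The new PROVIDER_REGEX complements the existing MODEL_REGEX: both peel a "[Model: ...]" / "[Provider: ...]" prefix (each followed by a blank line) off an incoming chat message. A minimal sketch of how the pair could be consumed, falling back to the defaults above — the helper name parseModelAndProvider is illustrative and not part of this diff; only the exported constants are:

  // Hypothetical helper: extract the model/provider prefix from a user message.
  // MODEL_REGEX, PROVIDER_REGEX, DEFAULT_MODEL and DEFAULT_PROVIDER come from
  // the constants above; everything else here is a sketch.
  function parseModelAndProvider(message: string): { model: string; provider: string } {
    // Note: MODEL_REGEX is anchored at the start of the message; PROVIDER_REGEX is not.
    const model = message.match(MODEL_REGEX)?.[1] ?? DEFAULT_MODEL;
    const provider = message.match(PROVIDER_REGEX)?.[1] ?? DEFAULT_PROVIDER;
    return { model, provider };
  }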
docker-compose.yaml ADDED
@@ -0,0 +1,61 @@
+ services:
+   bolt-ai:
+     image: bolt-ai:production
+     build:
+       context: .
+       dockerfile: Dockerfile
+       target: bolt-ai-production
+     ports:
+       - "5173:5173"
+     env_file: ".env.local"
+     environment:
+       - NODE_ENV=production
+       - COMPOSE_PROFILES=production
+       # Not strictly needed, but serves as hints for Coolify
+       - PORT=5173
+       - GROQ_API_KEY=${GROQ_API_KEY}
+       - OPENAI_API_KEY=${OPENAI_API_KEY}
+       - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
+       - OPEN_ROUTER_API_KEY=${OPEN_ROUTER_API_KEY}
+       - GOOGLE_GENERATIVE_AI_API_KEY=${GOOGLE_GENERATIVE_AI_API_KEY}
+       - OLLAMA_API_BASE_URL=${OLLAMA_API_BASE_URL}
+       - VITE_LOG_LEVEL=${VITE_LOG_LEVEL:-debug}
+       - RUNNING_IN_DOCKER=true
+     extra_hosts:
+       - "host.docker.internal:host-gateway"
+     command: pnpm run dockerstart
+     profiles:
+       - production # This service only runs in the production profile
+
+   bolt-ai-dev:
+     image: bolt-ai:development
+     build:
+       target: bolt-ai-development
+     environment:
+       - NODE_ENV=development
+       - VITE_HMR_PROTOCOL=ws
+       - VITE_HMR_HOST=localhost
+       - VITE_HMR_PORT=5173
+       - CHOKIDAR_USEPOLLING=true
+       - WATCHPACK_POLLING=true
+       - PORT=5173
+       - GROQ_API_KEY=${GROQ_API_KEY}
+       - OPENAI_API_KEY=${OPENAI_API_KEY}
+       - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
+       - OPEN_ROUTER_API_KEY=${OPEN_ROUTER_API_KEY}
+       - GOOGLE_GENERATIVE_AI_API_KEY=${GOOGLE_GENERATIVE_AI_API_KEY}
+       - OLLAMA_API_BASE_URL=${OLLAMA_API_BASE_URL}
+       - VITE_LOG_LEVEL=${VITE_LOG_LEVEL:-debug}
+       - RUNNING_IN_DOCKER=true
+     extra_hosts:
+       - "host.docker.internal:host-gateway"
+     volumes:
+       - type: bind
+         source: .
+         target: /app
+         consistency: cached
+       - /app/node_modules
+     ports:
+       - "5173:5173" # Same port, no conflict as only one runs at a time
+     command: pnpm run dev --host 0.0.0.0
+     profiles: ["development", "default"] # Make development the default profile
docker-compose.yml DELETED
@@ -1,24 +0,0 @@
- services:
-   bolt-app:
-     build:
-       context: .
-       dockerfile: Dockerfile
-     ports:
-       - "3000:3000"
-     environment:
-       - NODE_ENV=production
-       # Add any other environment variables your app needs
-       # - OPENAI_API_KEY=${OPENAI_API_KEY}
-       # - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
-       # - GROQ_API_KEY=${GROQ_API_KEY}
-       # - OPEN_ROUTER_API_KEY=${OPEN_ROUTER_API_KEY}
-     volumes:
-       # This volume is for development purposes, allowing live code updates
-       # Comment out or remove for production
-       - .:/app
-       # This volume is to prevent node_modules from being overwritten by the above volume
-       - /app/node_modules
-     command: pnpm run start
-
- volumes:
-   node_modules:
package.json CHANGED
@@ -13,7 +13,11 @@
    "test:watch": "vitest",
    "lint": "eslint --cache --cache-location ./node_modules/.cache/eslint .",
    "lint:fix": "npm run lint -- --fix",
-   "start": "bindings=$(./bindings.sh) && wrangler pages dev ./build/client $bindings --ip 0.0.0.0 --port 3000",
+   "start": "bindings=$(./bindings.sh) && wrangler pages dev ./build/client $bindings",
+   "dockerstart": "bindings=$(./bindings.sh) && wrangler pages dev ./build/client $bindings --ip 0.0.0.0 --port 5173 --no-show-interactive-dev-session",
+   "dockerrun": "docker run -it -d --name bolt-ai-live -p 5173:5173 --env-file .env.local bolt-ai",
+   "dockerbuild:prod": "docker build -t bolt-ai:production -t bolt-ai:latest --target bolt-ai-production .",
+   "dockerbuild": "docker build -t bolt-ai:development -t bolt-ai:latest --target bolt-ai-development .",
    "typecheck": "tsc",
    "typegen": "wrangler types",
    "preview": "pnpm run build && pnpm run start"
@@ -24,8 +28,8 @@
  "dependencies": {
    "@ai-sdk/anthropic": "^0.0.39",
    "@ai-sdk/google": "^0.0.52",
-   "@ai-sdk/openai": "^0.0.66",
    "@ai-sdk/mistral": "^0.0.43",
+   "@ai-sdk/openai": "^0.0.66",
    "@codemirror/autocomplete": "^6.17.0",
    "@codemirror/commands": "^6.6.0",
    "@codemirror/lang-cpp": "^6.0.2",
@@ -67,6 +71,7 @@
    "isbot": "^4.1.0",
    "istextorbinary": "^9.5.0",
    "jose": "^5.6.3",
+   "js-cookie": "^3.0.5",
    "jszip": "^3.10.1",
    "nanostores": "^0.10.3",
    "ollama-ai-provider": "^0.15.2",
@@ -90,6 +95,7 @@
    "@remix-run/dev": "^2.10.0",
    "@types/diff": "^5.2.1",
    "@types/file-saver": "^2.0.7",
+   "@types/js-cookie": "^3.0.6",
    "@types/react": "^18.2.20",
    "@types/react-dom": "^18.2.7",
    "fast-glob": "^3.3.2",
@@ -110,5 +116,6 @@
  },
  "resolutions": {
    "@typescript-eslint/utils": "^8.0.0-alpha.30"
- }
+ },
+ "packageManager": "[email protected]+sha512.22721b3a11f81661ae1ec68ce1a7b879425a1ca5b991c975b074ac220b187ce56c708fe5db69f4c962c989452eee76c82877f4ee80f474cebd61ee13461b6228"
  }
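The new js-cookie dependency (typed via @types/js-cookie in devDependencies) is a small client-side cookie helper. A minimal usage sketch — the cookie name and stored value here are illustrative only; this diff does not show where the app actually calls it:

  import Cookies from 'js-cookie';

  // Persist a small client-side preference across reloads (expires in 7 days).
  // 'preferredModel' is a hypothetical key, not one taken from this commit.
  Cookies.set('preferredModel', 'claude-3-5-sonnet-20240620', { expires: 7 });

  // Read it back; returns string | undefined.
  const preferred = Cookies.get('preferredModel');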
pnpm-lock.yaml CHANGED
@@ -146,6 +146,9 @@ importers:
        jose:
          specifier: ^5.6.3
          version: 5.6.3
+       js-cookie:
+         specifier: ^3.0.5
+         version: 3.0.5
        jszip:
          specifier: ^3.10.1
          version: 3.10.1
@@ -210,6 +213,9 @@ importers:
        '@types/file-saver':
          specifier: ^2.0.7
          version: 2.0.7
+       '@types/js-cookie':
+         specifier: ^3.0.6
+         version: 3.0.6
        '@types/react':
          specifier: ^18.2.20
          version: 18.3.3
@@ -1872,6 +1878,9 @@ packages:
    '@types/[email protected]':
      resolution: {integrity: sha512-WPs+bbQw5aCj+x6laNGWLH3wviHtoCv/P3+otBhbOhJgG8qtpdAMlTCxLtsTWA7LH1Oh/bFCHsBn0TPS5m30EQ==}

+   '@types/js-cookie@3.0.6':
+     resolution: {integrity: sha512-wkw9yd1kEXOPnvEeEV1Go1MmxtBJL0RR79aOTAApecWFVu7w0NNXNqhcWgvw2YgZDYadliXkl14pa3WXw5jlCQ==}
+
    '@types/[email protected]':
      resolution: {integrity: sha512-5+fP8P8MFNC+AyZCDxrB2pkZFPGzqQWUzpSeuuVLvm8VMcorNYavBqoFcxK8bQz4Qsbn4oUEEem4wDLfcysGHA==}

@@ -3455,6 +3464,10 @@ packages:
    [email protected]:
      resolution: {integrity: sha512-1Jh//hEEwMhNYPDDLwXHa2ePWgWiFNNUadVmguAAw2IJ6sj9mNxV5tGXJNqlMkJAybF6Lgw1mISDxTePP/187g==}

+   js-cookie@3.0.5:
+     resolution: {integrity: sha512-cEiJEAEoIbWfCZYKWhVwFuvPX1gETRYPw6LlaTKoxD3s2AkXzkCjnp6h0V77ozyqj0jakteJ4YqDJT830+lVGw==}
+     engines: {node: '>=14'}
+
    [email protected]:
      resolution: {integrity: sha512-RdJUflcE3cUzKiMqQgsCu06FPu9UdIJO0beYbPhHN4k6apgJtifcoCtT9bcxOpYBtpD2kCM6Sbzg4CausW/PKQ==}

@@ -7248,6 +7261,8 @@ snapshots:
      dependencies:
        '@types/unist': 3.0.2

+   '@types/js-cookie@3.0.6': {}
+
    '@types/[email protected]': {}

    '@types/[email protected]':

@@ -9211,6 +9226,8 @@ snapshots:
    [email protected]: {}

+   js-cookie@3.0.5: {}

    [email protected]: {}
wrangler.toml CHANGED
@@ -3,3 +3,4 @@ name = "bolt"
  compatibility_flags = ["nodejs_compat"]
  compatibility_date = "2024-07-01"
  pages_build_output_dir = "./build/client"
+ send_metrics = false