Commit 52cd1ae (unverified) · 2 parents: 73a07c9 1ba0606

Ali committed: Merge branch 'main' into new_bolt1
.dockerignore ADDED
@@ -0,0 +1,26 @@
+ # Ignore Git and GitHub files
+ .git
+ .github/
+
+ # Ignore Husky configuration files
+ .husky/
+
+ # Ignore documentation and metadata files
+ CONTRIBUTING.md
+ LICENSE
+ README.md
+
+ # Ignore environment examples and sensitive info
+ .env
+ *.local
+ *.example
+
+ # Ignore node modules, logs and cache files
+ **/*.log
+ **/node_modules
+ **/dist
+ **/build
+ **/.cache
+ logs
+ dist-ssr
+ .DS_Store
.env.example CHANGED
@@ -1,4 +1,4 @@
- # Rename this file to .env.local once you have filled in the below environment variables!
+ # Rename this file to .env once you have filled in the below environment variables!

  # Get your GROQ API Key here -
  # https://console.groq.com/keys
@@ -32,6 +32,9 @@ OLLAMA_API_BASE_URL=
  # You only need this environment variable set if you want to use OpenAI Like models
  OPENAI_LIKE_API_BASE_URL=

+ # You only need this environment variable set if you want to use DeepSeek models through their API
+ DEEPSEEK_API_KEY=
+
  # Get your OpenAI Like API Key
  OPENAI_LIKE_API_KEY=

@@ -40,5 +43,10 @@ OPENAI_LIKE_API_KEY=
  # You only need this environment variable set if you want to use Mistral models
  MISTRAL_API_KEY=

+ # Get your xAI API key
+ # https://x.ai/api
+ # You only need this environment variable set if you want to use xAI models
+ XAI_API_KEY=
+
  # Include this environment variable if you want more logging for debugging locally
  VITE_LOG_LEVEL=debug
.gitignore CHANGED
@@ -29,3 +29,5 @@ dist-ssr
  *.vars
  .wrangler
  _worker.bundle
+
+ Modelfile
CONTRIBUTING.md CHANGED
@@ -8,6 +8,7 @@ First off, thank you for considering contributing to Bolt.new! This fork aims to
  - [Pull Request Guidelines](#pull-request-guidelines)
  - [Coding Standards](#coding-standards)
  - [Development Setup](#development-setup)
+ - [Deployment with Docker](#docker-deployment-documentation)
  - [Project Structure](#project-structure)

  ## Code of Conduct
@@ -88,11 +89,113 @@ pnpm run dev

  **Note**: You will need Google Chrome Canary to run this locally if you use Chrome! It's an easy install and a good browser for web development anyway.

- ## Questions?
-
- For any questions about contributing, please:
- 1. Check existing documentation
- 2. Search through issues
- 3. Create a new issue with the question label
-
- Thank you for contributing to Bolt.new! 🚀
+ ## Testing
+
+ Run the test suite with:
+
+ ```bash
+ pnpm test
+ ```
+
+ ## Deployment
+
+ To deploy the application to Cloudflare Pages:
+
+ ```bash
+ pnpm run deploy
+ ```
+
+ Make sure you have the necessary permissions and Wrangler is correctly configured for your Cloudflare account.
+
+ # Docker Deployment Documentation
+
+ This guide outlines various methods for building and deploying the application using Docker.
+
+ ## Build Methods
+
+ ### 1. Using Helper Scripts
+
+ NPM scripts are provided for convenient building:
+
+ ```bash
+ # Development build
+ npm run dockerbuild
+
+ # Production build
+ npm run dockerbuild:prod
+ ```
+
+ ### 2. Direct Docker Build Commands
+
+ You can use Docker's target feature to specify the build environment:
+
+ ```bash
+ # Development build
+ docker build . --target bolt-ai-development
+
+ # Production build
+ docker build . --target bolt-ai-production
+ ```
+
+ ### 3. Docker Compose with Profiles
+
+ Use Docker Compose profiles to manage different environments:
+
+ ```bash
+ # Development environment
+ docker-compose --profile development up
+
+ # Production environment
+ docker-compose --profile production up
+ ```
+
+ ## Running the Application
+
+ After building using any of the methods above, run the container with:
+
+ ```bash
+ # Development
+ docker run -p 5173:5173 --env-file .env.local bolt-ai:development
+
+ # Production
+ docker run -p 5173:5173 --env-file .env.local bolt-ai:production
+ ```
+
+ ## Deployment with Coolify
+
+ [Coolify](https://github.com/coollabsio/coolify) provides a straightforward deployment process:
+
+ 1. Import your Git repository as a new project
+ 2. Select your target environment (development/production)
+ 3. Choose "Docker Compose" as the Build Pack
+ 4. Configure deployment domains
+ 5. Set the custom start command:
+    ```bash
+    docker compose --profile production up
+    ```
+ 6. Configure environment variables
+    - Add necessary AI API keys
+    - Adjust other environment variables as needed
+ 7. Deploy the application
+
+ ## VS Code Integration
+
+ The `docker-compose.yaml` configuration is compatible with VS Code dev containers:
+
+ 1. Open the command palette in VS Code
+ 2. Select the dev container configuration
+ 3. Choose the "development" profile from the context menu
+
+ ## Environment Files
+
+ Ensure you have the appropriate `.env.local` file configured before running the containers. This file should contain:
+ - API keys
+ - Environment-specific configurations
+ - Other required environment variables
+
+ ## Notes
+
+ - Port 5173 is exposed and mapped for both development and production environments
+ - Environment variables are loaded from `.env.local`
+ - Different profiles (development/production) can be used for different deployment scenarios
+ - The configuration supports both local development and production deployment
Dockerfile CHANGED
@@ -1,29 +1,67 @@
- # Use an official Node.js runtime as the base image
- FROM node:20.15.1
+ ARG BASE=node:20.18.0
+ FROM ${BASE} AS base

- # Set the working directory in the container
  WORKDIR /app

- # Install pnpm
- RUN npm install -g pnpm@9.4.0
-
- # Copy package.json and pnpm-lock.yaml (if available)
- COPY package.json pnpm-lock.yaml* ./
-
- # Install dependencies
- RUN pnpm install
-
- # Copy the rest of the application code
+ # Install dependencies (this step is cached as long as the dependencies don't change)
+ COPY package.json pnpm-lock.yaml ./
+
+ RUN corepack enable pnpm && pnpm install
+
+ # Copy the rest of your app's source code
  COPY . .

- # Build the application
- RUN pnpm run build
-
- # Make sure bindings.sh is executable
- RUN chmod +x bindings.sh
-
- # Expose the port the app runs on (adjust if you specified a different port)
- EXPOSE 3000
-
- # Start the application
- CMD ["pnpm", "run", "start"]
+ # Expose the port the app runs on
+ EXPOSE 5173
+
+ # Production image
+ FROM base AS bolt-ai-production
+
+ # Define environment variables with default values or let them be overridden
+ ARG GROQ_API_KEY
+ ARG OPENAI_API_KEY
+ ARG ANTHROPIC_API_KEY
+ ARG OPEN_ROUTER_API_KEY
+ ARG GOOGLE_GENERATIVE_AI_API_KEY
+ ARG OLLAMA_API_BASE_URL
+ ARG VITE_LOG_LEVEL=debug
+
+ ENV WRANGLER_SEND_METRICS=false \
+     GROQ_API_KEY=${GROQ_API_KEY} \
+     OPENAI_API_KEY=${OPENAI_API_KEY} \
+     ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY} \
+     OPEN_ROUTER_API_KEY=${OPEN_ROUTER_API_KEY} \
+     GOOGLE_GENERATIVE_AI_API_KEY=${GOOGLE_GENERATIVE_AI_API_KEY} \
+     OLLAMA_API_BASE_URL=${OLLAMA_API_BASE_URL} \
+     VITE_LOG_LEVEL=${VITE_LOG_LEVEL}
+
+ # Pre-configure wrangler to disable metrics
+ RUN mkdir -p /root/.config/.wrangler && \
+     echo '{"enabled":false}' > /root/.config/.wrangler/metrics.json
+
+ RUN pnpm run build
+
+ CMD ["pnpm", "run", "dockerstart"]
+
+ # Development image
+ FROM base AS bolt-ai-development
+
+ # Define the same environment variables for development
+ ARG GROQ_API_KEY
+ ARG OPENAI_API_KEY
+ ARG ANTHROPIC_API_KEY
+ ARG OPEN_ROUTER_API_KEY
+ ARG GOOGLE_GENERATIVE_AI_API_KEY
+ ARG OLLAMA_API_BASE_URL
+ ARG VITE_LOG_LEVEL=debug
+
+ ENV GROQ_API_KEY=${GROQ_API_KEY} \
+     OPENAI_API_KEY=${OPENAI_API_KEY} \
+     ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY} \
+     OPEN_ROUTER_API_KEY=${OPEN_ROUTER_API_KEY} \
+     GOOGLE_GENERATIVE_AI_API_KEY=${GOOGLE_GENERATIVE_AI_API_KEY} \
+     OLLAMA_API_BASE_URL=${OLLAMA_API_BASE_URL} \
+     VITE_LOG_LEVEL=${VITE_LOG_LEVEL}
+
+ RUN mkdir -p /app/run
+ CMD pnpm run dev --host
README.md CHANGED
@@ -20,6 +20,7 @@ This fork of Bolt.new allows you to choose the LLM that you use for each prompt!
  - ✅ Publish projects directly to GitHub (@goncaloalves)
  - ⬜ Prevent Bolt from rewriting files as often (Done but need to review PR still)
  - ⬜ **HIGH PRIORITY** - Better prompting for smaller LLMs (code window sometimes doesn't start)
+ - ⬜ **HIGH PRIORITY** - Load local projects into the app
  - ⬜ **HIGH PRIORITY** - Attach images to prompts
  - ⬜ **HIGH PRIORITY** - Run agents in the backend as opposed to a single model call
  - ⬜ LM Studio Integration
@@ -27,12 +28,17 @@ This fork of Bolt.new allows you to choose the LLM that you use for each prompt!
  - ⬜ Azure Open AI API Integration
  - ⬜ HuggingFace Integration
  - ⬜ Perplexity Integration
+ - ⬜ Vertex AI Integration
+ - ⬜ Cohere Integration
  - ⬜ Deploy directly to Vercel/Netlify/other similar platforms
- - ⬜ Load local projects into the app
  - ⬜ Ability to revert code to earlier version
  - ⬜ Prompt caching
+ - ⬜ Better prompt enhancing
  - ⬜ Ability to enter API keys in the UI
  - ⬜ Have LLM plan the project in a MD file for better results/transparency
+ - ⬜ VSCode Integration with git-like confirmations
+ - ⬜ Upload documents for knowledge - UI design templates, a code base to reference coding style, etc.
+ - ⬜ Voice prompting

  # Bolt.new: AI-Powered Full-Stack Web Development in the Browser

@@ -55,28 +61,47 @@ Whether you're an experienced developer, a PM, or a designer, Bolt.new allows

  For developers interested in building their own AI-powered development tools with WebContainers, check out the open-source Bolt codebase in this repo!

- ## Prerequisites
-
- Before you begin, ensure you have the following installed:
-
- - Node.js (v20.15.1)
- - pnpm (v9.4.0)
-
- ## Setup
-
- 1. Clone the repository (if you haven't already):
-
- ```bash
- git clone https://github.com/coleam00/bolt.new-any-llm.git
- ```
-
- 2. Install dependencies:
-
- ```bash
- pnpm install
- ```
-
- 3. Rename `.env.example` to .env.local and add your LLM API keys (you only have to set the ones you want to use and Ollama doesn't need an API key because it runs locally on your computer):
+ ## Setup
+
+ Many of you are new to installing software from GitHub. If you have any installation troubles, reach out and submit an "issue" using the links above, or feel free to enhance this documentation by forking it, editing the instructions, and opening a pull request.
+
+ 1. Install Git from https://git-scm.com/downloads
+
+ 2. Install Node.js from https://nodejs.org/en/download/
+
+ Pay attention to the installer notes after completion.
+
+ On all operating systems, the path to Node.js should automatically be added to your system path, but you can check your path if you want to be sure. On Windows, search for "edit the system environment variables", select "Environment Variables..." once you are in the system properties, and then check for a path to Node in your "Path" system variable. On a Mac or Linux machine, check whether /usr/local/bin is in your $PATH by opening your Terminal and running:
+
+ ```
+ echo $PATH
+ ```
+
+ If you see /usr/local/bin in the output then you're good to go.
+
+ 3. Clone the repository (if you haven't already) by opening a Terminal window (or CMD with admin permissions) and then typing in this:
+
+ ```
+ git clone https://github.com/coleam00/bolt.new-any-llm.git
+ ```
+
+ 4. Rename .env.example to .env and add your LLM API keys. You will find this file on a Mac at "[your name]/bolt.new-any-llm/.env.example". For Windows and Linux the path will be similar.
+
+ ![image](https://github.com/user-attachments/assets/7e6a532c-2268-401f-8310-e8d20c731328)
+
+ If you can't see the file indicated above, it's likely you can't view hidden files. On Mac, open a Terminal window and enter the command below. On Windows, you will see the hidden files option in File Explorer Settings. A quick Google search will help you if you are stuck here.
+
+ ```
+ defaults write com.apple.finder AppleShowAllFiles YES
+ ```
+
+ **NOTE**: you only have to set the keys you want to use, and Ollama doesn't need an API key because it runs locally on your computer.
+
+ Get your GROQ API Key here: https://console.groq.com/keys
+
+ Get your OpenAI API Key by following these instructions: https://help.openai.com/en/articles/4936850-where-do-i-find-my-openai-api-key
+
+ Get your Anthropic API Key in your account settings: https://console.anthropic.com/settings/keys

  ```
  GROQ_API_KEY=XXX
@@ -90,7 +115,98 @@ Optionally, you can set the debug level:
  VITE_LOG_LEVEL=debug
  ```

- **Important**: Never commit your `.env.local` file to version control. It's already included in .gitignore.
+ **Important**: Never commit your `.env` file to version control. It's already included in .gitignore.
+
+ ## Run with Docker
+
+ Prerequisites:
+
+ Git and Node.js as mentioned above, as well as Docker: https://www.docker.com/
+
+ ### 1a. Using Helper Scripts
+
+ NPM scripts are provided for convenient building:
+
+ ```bash
+ # Development build
+ npm run dockerbuild
+
+ # Production build
+ npm run dockerbuild:prod
+ ```
+
+ ### 1b. Direct Docker Build Commands (alternative to using NPM scripts)
+
+ You can use Docker's target feature to specify the build environment instead of using NPM scripts if you wish:
+
+ ```bash
+ # Development build
+ docker build . --target bolt-ai-development
+
+ # Production build
+ docker build . --target bolt-ai-production
+ ```
+
+ ### 2. Docker Compose with Profiles to Run the Container
+
+ Use Docker Compose profiles to manage different environments:
+
+ ```bash
+ # Development environment
+ docker-compose --profile development up
+
+ # Production environment
+ docker-compose --profile production up
+ ```
+
+ When you run the Docker Compose command with the development profile, any changes you
+ make on your machine to the code will automatically be reflected in the site running
+ on the container (i.e. hot reloading still applies!).
+
+ ## Run Without Docker
+
+ 1. Install dependencies using Terminal (or CMD in Windows with admin permissions):
+
+ ```
+ pnpm install
+ ```
+
+ If you get an error saying "command not found: pnpm" or similar, that means pnpm isn't installed. You can install it via:
+
+ ```
+ sudo npm install -g pnpm
+ ```
+
+ 2. Start the application with the command:
+
+ ```bash
+ pnpm run dev
+ ```
+
+ ## Super Important Note on Running Ollama Models
+
+ Ollama models by default only have 2048 tokens for their context window, even for large models that can easily handle way more.
+ This is not a large enough window to handle the Bolt.new/oTToDev prompt! You have to create a version of any model you want
+ to use where you specify a larger context window. Luckily it's super easy to do that.
+
+ All you have to do is:
+
+ - Create a file called "Modelfile" (no file extension) anywhere on your computer
+ - Put in the two lines:
+
+ ```
+ FROM [Ollama model ID such as qwen2.5-coder:7b]
+ PARAMETER num_ctx 32768
+ ```
+
+ - Run the command:
+
+ ```
+ ollama create -f Modelfile [your new model ID, can be whatever you want (example: qwen2.5-coder-extra-ctx:7b)]
+ ```
+
+ Now you have a new Ollama model that isn't limited to a short context length like Ollama models are by default.
+ You'll see this new model in the list of Ollama models along with all the others you pulled!
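The Modelfile steps above can also be scripted; this is a minimal sketch, with `qwen2.5-coder:7b` used only as an example model ID:

```shell
# Write a Modelfile that raises the context window for an Ollama model.
# The base model ID below is an example; substitute the one you actually use.
cat > Modelfile <<'EOF'
FROM qwen2.5-coder:7b
PARAMETER num_ctx 32768
EOF

# Inspect what was written before registering it with `ollama create`.
cat Modelfile
```

On a machine with Ollama installed, `ollama create -f Modelfile qwen2.5-coder-extra-ctx:7b` would then register the larger-context variant.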

  ## Adding New LLMs:
 
app/lib/.server/llm/api-key.ts CHANGED
@@ -31,6 +31,8 @@ export function getAPIKey(cloudflareEnv: Env, provider: string, userApiKeys?: Re
      return env.MISTRAL_API_KEY || cloudflareEnv.MISTRAL_API_KEY;
    case "OpenAILike":
      return env.OPENAI_LIKE_API_KEY || cloudflareEnv.OPENAI_LIKE_API_KEY;
+   case "xAI":
+     return env.XAI_API_KEY || cloudflareEnv.XAI_API_KEY;
    default:
      return "";
  }
@@ -41,7 +43,11 @@ export function getBaseURL(cloudflareEnv: Env, provider: string) {
    case 'OpenAILike':
      return env.OPENAI_LIKE_API_BASE_URL || cloudflareEnv.OPENAI_LIKE_API_BASE_URL;
    case 'Ollama':
-     return env.OLLAMA_API_BASE_URL || cloudflareEnv.OLLAMA_API_BASE_URL || "http://localhost:11434";
+     let baseUrl = env.OLLAMA_API_BASE_URL || cloudflareEnv.OLLAMA_API_BASE_URL || "http://localhost:11434";
+     if (env.RUNNING_IN_DOCKER === 'true') {
+       baseUrl = baseUrl.replace("localhost", "host.docker.internal");
+     }
+     return baseUrl;
    default:
      return "";
  }
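The Docker-aware Ollama branch added above can be exercised outside the app; this shell sketch mirrors the same rewrite (the function name and inputs are illustrative, not part of the repo):

```shell
# Mirror the RUNNING_IN_DOCKER rewrite added to getBaseURL: when running
# inside a container, localhost must become host.docker.internal so the
# app can reach an Ollama server on the host machine.
rewrite_ollama_url() {
  url="${1:-http://localhost:11434}"   # fall back to the same default as the code
  running_in_docker="$2"
  if [ "$running_in_docker" = "true" ]; then
    url=$(printf '%s' "$url" | sed 's/localhost/host.docker.internal/')
  fi
  printf '%s\n' "$url"
}

rewrite_ollama_url "http://localhost:11434" "true"
```

Outside Docker the URL is returned unchanged, matching the behaviour of the original one-line return.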
app/lib/.server/llm/model.ts CHANGED
@@ -91,7 +91,6 @@ export function getXAIModel(apiKey: string, model: string) {

    return openai(model);
  }
-
  export function getModel(provider: string, model: string, env: Env, apiKeys?: Record<string, string>) {
    const apiKey = getAPIKey(env, provider, apiKeys);
    const baseURL = getBaseURL(env, provider);
docker-compose.yaml ADDED
@@ -0,0 +1,61 @@
+ services:
+   bolt-ai:
+     image: bolt-ai:production
+     build:
+       context: .
+       dockerfile: Dockerfile
+       target: bolt-ai-production
+     ports:
+       - "5173:5173"
+     env_file: ".env.local"
+     environment:
+       - NODE_ENV=production
+       - COMPOSE_PROFILES=production
+       # Not strictly needed but serving as hints for Coolify
+       - PORT=5173
+       - GROQ_API_KEY=${GROQ_API_KEY}
+       - OPENAI_API_KEY=${OPENAI_API_KEY}
+       - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
+       - OPEN_ROUTER_API_KEY=${OPEN_ROUTER_API_KEY}
+       - GOOGLE_GENERATIVE_AI_API_KEY=${GOOGLE_GENERATIVE_AI_API_KEY}
+       - OLLAMA_API_BASE_URL=${OLLAMA_API_BASE_URL}
+       - VITE_LOG_LEVEL=${VITE_LOG_LEVEL:-debug}
+       - RUNNING_IN_DOCKER=true
+     extra_hosts:
+       - "host.docker.internal:host-gateway"
+     command: pnpm run dockerstart
+     profiles:
+       - production # This service only runs in the production profile
+
+   bolt-ai-dev:
+     image: bolt-ai:development
+     build:
+       target: bolt-ai-development
+     environment:
+       - NODE_ENV=development
+       - VITE_HMR_PROTOCOL=ws
+       - VITE_HMR_HOST=localhost
+       - VITE_HMR_PORT=5173
+       - CHOKIDAR_USEPOLLING=true
+       - WATCHPACK_POLLING=true
+       - PORT=5173
+       - GROQ_API_KEY=${GROQ_API_KEY}
+       - OPENAI_API_KEY=${OPENAI_API_KEY}
+       - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
+       - OPEN_ROUTER_API_KEY=${OPEN_ROUTER_API_KEY}
+       - GOOGLE_GENERATIVE_AI_API_KEY=${GOOGLE_GENERATIVE_AI_API_KEY}
+       - OLLAMA_API_BASE_URL=${OLLAMA_API_BASE_URL}
+       - VITE_LOG_LEVEL=${VITE_LOG_LEVEL:-debug}
+       - RUNNING_IN_DOCKER=true
+     extra_hosts:
+       - "host.docker.internal:host-gateway"
+     volumes:
+       - type: bind
+         source: .
+         target: /app
+         consistency: cached
+       - /app/node_modules
+     ports:
+       - "5173:5173" # Same port, no conflict as only one runs at a time
+     command: pnpm run dev --host 0.0.0.0
+     profiles: ["development", "default"] # Make development the default profile
docker-compose.yml DELETED
@@ -1,24 +0,0 @@
- services:
-   bolt-app:
-     build:
-       context: .
-       dockerfile: Dockerfile
-     ports:
-       - "3000:3000"
-     environment:
-       - NODE_ENV=production
-       # Add any other environment variables your app needs
-       # - OPENAI_API_KEY=${OPENAI_API_KEY}
-       # - ANTHROPIC_API_KEY=${ANTHROPIC_API_KEY}
-       # - GROQ_API_KEY=${GROQ_API_KEY}
-       # - OPEN_ROUTER_API_KEY=${OPEN_ROUTER_API_KEY}
-     volumes:
-       # This volume is for development purposes, allowing live code updates
-       # Comment out or remove for production
-       - .:/app
-       # This volume is to prevent node_modules from being overwritten by the above volume
-       - /app/node_modules
-     command: pnpm run start
-
- volumes:
-   node_modules:
package.json CHANGED
@@ -13,7 +13,11 @@
    "test:watch": "vitest",
    "lint": "eslint --cache --cache-location ./node_modules/.cache/eslint .",
    "lint:fix": "npm run lint -- --fix",
-   "start": "bindings=$(./bindings.sh) && wrangler pages dev ./build/client $bindings --ip 0.0.0.0 --port 3000",
+   "start": "bindings=$(./bindings.sh) && wrangler pages dev ./build/client $bindings",
+   "dockerstart": "bindings=$(./bindings.sh) && wrangler pages dev ./build/client $bindings --ip 0.0.0.0 --port 5173 --no-show-interactive-dev-session",
+   "dockerrun": "docker run -it -d --name bolt-ai-live -p 5173:5173 --env-file .env.local bolt-ai",
+   "dockerbuild:prod": "docker build -t bolt-ai:production -t bolt-ai:latest --target bolt-ai-production .",
+   "dockerbuild": "docker build -t bolt-ai:development -t bolt-ai:latest --target bolt-ai-development .",
    "typecheck": "tsc",
    "typegen": "wrangler types",
    "preview": "pnpm run build && pnpm run start"
@@ -112,5 +116,6 @@
    },
    "resolutions": {
      "@typescript-eslint/utils": "^8.0.0-alpha.30"
-   }
+   },
+   "packageManager": "[email protected]+sha512.22721b3a11f81661ae1ec68ce1a7b879425a1ca5b991c975b074ac220b187ce56c708fe5db69f4c962c989452eee76c82877f4ee80f474cebd61ee13461b6228"
  }
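The `start` and `dockerstart` scripts both splice the output of `bindings.sh` into `wrangler pages dev`. That script's contents are not shown in this diff, but a helper in its style would turn `KEY=value` lines into `--binding` flags roughly like this hypothetical sketch (`build_bindings` and the sample file are illustrative only):

```shell
# Hypothetical sketch of a bindings.sh-style helper: read KEY=value pairs
# from an env file and emit one --binding flag per pair for wrangler.
# The real bindings.sh may differ; its contents are not part of this diff.
build_bindings() {
  flags=""
  while IFS='=' read -r key value; do
    # Skip blank lines and comments.
    case "$key" in '' | '#'*) continue ;; esac
    flags="$flags --binding ${key}=${value}"
  done < "$1"
  printf '%s\n' "${flags# }"
}

printf 'GROQ_API_KEY=XXX\n# a comment\nVITE_LOG_LEVEL=debug\n' > /tmp/env.sample
build_bindings /tmp/env.sample
```

Note that values containing spaces would need quoting before being passed on a real command line.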
wrangler.toml CHANGED
@@ -3,3 +3,4 @@ name = "bolt"
  compatibility_flags = ["nodejs_compat"]
  compatibility_date = "2024-07-01"
  pages_build_output_dir = "./build/client"
+ send_metrics = false