LlamaFinetuneGGUF commited on
Commit
7e70dc5
·
1 Parent(s): 2638c1a

docs: updated Contributing


updated Contributing in the docs
updated Contributing and FAQ in the GitHub part as well

Files changed (3)
  1. CONTRIBUTING.md +159 -157
  2. FAQ.md +56 -28
  3. docs/docs/CONTRIBUTING.md +128 -155
CONTRIBUTING.md CHANGED
@@ -1,217 +1,219 @@
1
- # Contributing to bolt.diy
2
 
3
- First off, thank you for considering contributing to bolt.diy! This fork aims to expand the capabilities of the original project by integrating multiple LLM providers and enhancing functionality. Every contribution helps make bolt.diy a better tool for developers worldwide.
 
 
4
 
5
  ## πŸ“‹ Table of Contents
6
- - [Code of Conduct](#code-of-conduct)
7
- - [How Can I Contribute?](#how-can-i-contribute)
8
- - [Pull Request Guidelines](#pull-request-guidelines)
9
- - [Coding Standards](#coding-standards)
10
- - [Development Setup](#development-setup)
11
- - [Deploymnt with Docker](#docker-deployment-documentation)
12
- - [Project Structure](#project-structure)
13
-
14
- ## Code of Conduct
15
-
16
- This project and everyone participating in it is governed by our Code of Conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to the project maintainers.
17
-
18
- ## How Can I Contribute?
19
-
20
- ### 🐞 Reporting Bugs and Feature Requests
21
- - Check the issue tracker to avoid duplicates
22
- - Use the issue templates when available
23
- - Include as much relevant information as possible
24
- - For bugs, add steps to reproduce the issue
25
-
26
- ### πŸ”§ Code Contributions
27
- 1. Fork the repository
28
- 2. Create a new branch for your feature/fix
29
- 3. Write your code
30
- 4. Submit a pull request
31
-
32
- ### ✨ Becoming a Core Contributor
33
- We're looking for dedicated contributors to help maintain and grow this project. If you're interested in becoming a core contributor, please fill out our [Contributor Application Form](https://forms.gle/TBSteXSDCtBDwr5m7).
34
-
35
- ## Pull Request Guidelines
36
-
37
- ### πŸ“ PR Checklist
38
- - [ ] Branch from the main branch
39
- - [ ] Update documentation if needed
40
- - [ ] Manually verify all new functionality works as expected
41
- - [ ] Keep PRs focused and atomic
42
-
43
- ### πŸ‘€ Review Process
44
- 1. Manually test the changes
45
- 2. At least one maintainer review required
46
- 3. Address all review comments
47
- 4. Maintain clean commit history
48
-
49
- ## Coding Standards
50
-
51
- ### πŸ’» General Guidelines
52
- - Follow existing code style
53
- - Comment complex logic
54
- - Keep functions focused and small
55
- - Use meaningful variable names
56
- - Lint your code. This repo contains a pre-commit-hook that will verify your code is linted properly,
57
- so set up your IDE to do that for you!
58
-
59
- ## Development Setup
60
-
61
- ### πŸ”„ Initial Setup
62
- 1. Clone the repository:
63
- ```bash
64
- git clone https://github.com/coleam00/bolt.new-any-llm.git
65
- ```
66
 
67
- 2. Install dependencies:
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
68
  ```bash
69
- pnpm install
70
  ```
 
71
 
72
- 3. Set up environment variables:
73
- - Rename `.env.example` to `.env.local`
74
- - Add your LLM API keys (only set the ones you plan to use):
75
- ```bash
76
- GROQ_API_KEY=XXX
77
- HuggingFace_API_KEY=XXX
78
- OPENAI_API_KEY=XXX
79
- ANTHROPIC_API_KEY=XXX
80
- ...
81
- ```
82
- - Optionally set debug level:
83
- ```bash
84
- VITE_LOG_LEVEL=debug
85
- ```
86
 
87
- - Optionally set context size:
88
  ```bash
89
- DEFAULT_NUM_CTX=32768
90
  ```
91
 
92
- Some Example Context Values for the qwen2.5-coder:32b models are.
93
-
94
- * DEFAULT_NUM_CTX=32768 - Consumes 36GB of VRAM
95
- * DEFAULT_NUM_CTX=24576 - Consumes 32GB of VRAM
96
- * DEFAULT_NUM_CTX=12288 - Consumes 26GB of VRAM
97
- * DEFAULT_NUM_CTX=6144 - Consumes 24GB of VRAM
98
 
99
- **Important**: Never commit your `.env.local` file to version control. It's already included in .gitignore.
100
 
101
- ### πŸš€ Running the Development Server
102
  ```bash
103
- pnpm run dev
104
  ```
 
105
 
106
- **Note**: You will need Google Chrome Canary to run this locally if you use Chrome! It's an easy install and a good browser for web development anyway.
107
 
108
- ## Testing
109
 
110
- Run the test suite with:
111
 
112
- ```bash
113
- pnpm test
114
- ```
115
 
116
- ## Deployment
117
 
118
- To deploy the application to Cloudflare Pages:
119
 
 
120
  ```bash
121
- pnpm run deploy
 
122
  ```
123
 
124
- Make sure you have the necessary permissions and Wrangler is correctly configured for your Cloudflare account.
 
 
 
125
 
126
- # Docker Deployment Documentation
 
 
 
127
 
128
- This guide outlines various methods for building and deploying the application using Docker.
 
 
 
129
 
130
- ## Build Methods
131
 
132
- ### 1. Using Helper Scripts
133
 
134
- NPM scripts are provided for convenient building:
135
 
 
136
  ```bash
137
- # Development build
138
- npm run dockerbuild
139
-
140
  # Production build
141
  npm run dockerbuild:prod
142
  ```
143
 
144
- ### 2. Direct Docker Build Commands
145
-
146
- You can use Docker's target feature to specify the build environment:
147
-
148
  ```bash
149
- # Development build
150
- docker build . --target bolt-ai-development
151
-
152
- # Production build
153
  docker build . --target bolt-ai-production
154
  ```
155
 
156
- ### 3. Docker Compose with Profiles
157
-
158
- Use Docker Compose profiles to manage different environments:
159
-
160
  ```bash
161
- # Development environment
162
- docker-compose --profile development up
163
-
164
- # Production environment
165
  docker-compose --profile production up
166
  ```
167
 
168
- ## Running the Application
169
-
170
- After building using any of the methods above, run the container with:
171
-
172
  ```bash
173
- # Development
174
- docker run -p 5173:5173 --env-file .env.local bolt-ai:development
175
-
176
- # Production
177
  docker run -p 5173:5173 --env-file .env.local bolt-ai:production
178
  ```
179
 
180
- ## Deployment with Coolify
181
 
182
- [Coolify](https://github.com/coollabsio/coolify) provides a straightforward deployment process:
183
 
184
- 1. Import your Git repository as a new project
185
- 2. Select your target environment (development/production)
186
- 3. Choose "Docker Compose" as the Build Pack
187
- 4. Configure deployment domains
188
- 5. Set the custom start command:
189
  ```bash
190
  docker compose --profile production up
191
  ```
192
- 6. Configure environment variables
193
- - Add necessary AI API keys
194
- - Adjust other environment variables as needed
195
- 7. Deploy the application
196
 
197
- ## VS Code Integration
 
 
198
 
199
- The `docker-compose.yaml` configuration is compatible with VS Code dev containers:
200
 
201
- 1. Open the command palette in VS Code
202
- 2. Select the dev container configuration
203
- 3. Choose the "development" profile from the context menu
204
 
205
- ## Environment Files
 
 
 
206
 
207
- Ensure you have the appropriate `.env.local` file configured before running the containers. This file should contain:
208
- - API keys
209
- - Environment-specific configurations
210
- - Other required environment variables
211
 
212
- ## Notes
213
 
214
- - Port 5173 is exposed and mapped for both development and production environments
215
- - Environment variables are loaded from `.env.local`
216
- - Different profiles (development/production) can be used for different deployment scenarios
217
- - The configuration supports both local development and production deployment
 
 
 
 
 
1
+ # Contribution Guidelines
2
 
3
+ Welcome! This guide provides all the details you need to contribute effectively to the project. Thank you for helping us make **bolt.diy** a better tool for developers worldwide. πŸ’‘
4
+
5
+ ---
6
 
7
  ## 📋 Table of Contents
8
 
9
+ 1. [Code of Conduct](#code-of-conduct)
10
+ 2. [How Can I Contribute?](#how-can-i-contribute)
11
+ 3. [Pull Request Guidelines](#pull-request-guidelines)
12
+ 4. [Coding Standards](#coding-standards)
13
+ 5. [Development Setup](#development-setup)
14
+ 6. [Testing](#testing)
15
+ 7. [Deployment](#deployment)
16
+ 8. [Docker Deployment](#docker-deployment)
17
+ 9. [VS Code Dev Containers Integration](#vs-code-dev-containers-integration)
18
+
19
+ ---
20
+
21
+ ## πŸ›‘οΈ Code of Conduct
22
+
23
+ This project is governed by our **Code of Conduct**. By participating, you agree to uphold this code. Report unacceptable behavior to the project maintainers.
24
+
25
+ ---
26
+
27
+ ## πŸ› οΈ How Can I Contribute?
28
+
29
+ ### 1️⃣ Reporting Bugs or Feature Requests
30
+ - Check the [issue tracker](#) to avoid duplicates.
31
+ - Use issue templates (if available).
32
+ - Provide detailed, relevant information and steps to reproduce bugs.
33
+
34
+ ### 2️⃣ Code Contributions
35
+ 1. Fork the repository.
36
+ 2. Create a feature or fix branch.
37
+ 3. Write and test your code.
38
+ 4. Submit a pull request (PR).
39
+
40
+ ### 3️⃣ Join as a Core Contributor
41
+ Interested in maintaining and growing the project? Fill out our [Contributor Application Form](https://forms.gle/TBSteXSDCtBDwr5m7).
42
+
43
+ ---
44
+
45
+ ## βœ… Pull Request Guidelines
46
+
47
+ ### PR Checklist
48
+ - Branch from the **main** branch.
49
+ - Update documentation, if needed.
50
+ - Test all functionality manually.
51
+ - Focus on one feature/bug per PR.
52
+
53
+ ### Review Process
54
+ 1. Manual testing by reviewers.
55
+ 2. At least one maintainer review required.
56
+ 3. Address review comments.
57
+ 4. Maintain a clean commit history.
58
+
59
+ ---
60
+
61
+ ## πŸ“ Coding Standards
62
+
63
+ ### General Guidelines
64
+ - Follow existing code style.
65
+ - Comment complex logic.
66
+ - Keep functions small and focused.
67
+ - Use meaningful variable names.
68
+
69
+ ---
70
+
71
+ ## πŸ–₯️ Development Setup
72
+
73
+ ### 1️⃣ Initial Setup
74
+ - Clone the repository:
75
+ ```bash
76
+ git clone https://github.com/stackblitz-labs/bolt.diy.git
77
+ ```
78
+ - Install dependencies:
79
+ ```bash
80
+ pnpm install
81
+ ```
82
+ - Set up environment variables:
83
+ 1. Rename `.env.example` to `.env.local`.
84
+ 2. Add your API keys:
85
+ ```bash
86
+ GROQ_API_KEY=XXX
87
+ HuggingFace_API_KEY=XXX
88
+ OPENAI_API_KEY=XXX
89
+ ...
90
+ ```
91
+ 3. Optionally set:
92
+ - Debug level: `VITE_LOG_LEVEL=debug`
93
+ - Context size: `DEFAULT_NUM_CTX=32768`
94
+
95
+ **Note**: Never commit your `.env.local` file to version control. It’s already in `.gitignore`.
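+ The "only set the keys you plan to use" advice is easy to check mechanically. As a hedged sketch (the helper name and the placeholder convention are our own, not part of the repo), a small shell function can warn when `.env.local` still contains only `XXX` placeholders:

```shell
# Sketch: succeed only if at least one known provider key in the given
# env file has a non-placeholder value. Key names follow the setup steps
# above; the check itself is our own convenience, not a project script.
check_env_keys() {
  local file="$1"
  # Keep lines that set a known key, then drop ones still set to XXX.
  grep -E '^(GROQ_API_KEY|HuggingFace_API_KEY|OPENAI_API_KEY)=..*' "$file" \
    | grep -v '=XXX$' > /dev/null
}

# Example: a file that still contains only placeholders fails the check.
printf 'OPENAI_API_KEY=XXX\n' > /tmp/env.local.example
if check_env_keys /tmp/env.local.example; then
  echo "keys configured"
else
  echo "no real API keys set yet"
fi
```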
96
+
97
+ ### 2️⃣ Run Development Server
98
  ```bash
99
+ pnpm run dev
100
  ```
101
+ **Tip**: Use **Google Chrome Canary** for local testing.
102
 
103
+ ---
104
+
105
+ ## 🧪 Testing
106
 
107
+ Run the test suite with:
108
  ```bash
109
+ pnpm test
110
  ```
111
 
112
+ ---
 
 
 
 
 
113
 
114
+ ## πŸš€ Deployment
115
 
116
+ ### Deploy to Cloudflare Pages
117
  ```bash
118
+ pnpm run deploy
119
  ```
120
+ Ensure you have required permissions and that Wrangler is configured.
121
 
122
+ ---
123
 
124
+ ## 🐳 Docker Deployment
125
 
126
+ This section outlines the methods for deploying the application using Docker. The processes for **Development** and **Production** are provided separately for clarity.
127
 
128
+ ---
 
 
129
 
130
+ ### πŸ§‘β€πŸ’» Development Environment
131
 
132
+ #### Build Options
133
 
134
+ **Option 1: Helper Scripts**
135
  ```bash
136
+ # Development build
137
+ npm run dockerbuild
138
  ```
139
 
140
+ **Option 2: Direct Docker Build Command**
141
+ ```bash
142
+ docker build . --target bolt-ai-development
143
+ ```
144
 
145
+ **Option 3: Docker Compose Profile**
146
+ ```bash
147
+ docker-compose --profile development up
148
+ ```
149
 
150
+ #### Running the Development Container
151
+ ```bash
152
+ docker run -p 5173:5173 --env-file .env.local bolt-ai:development
153
+ ```
154
 
155
+ ---
156
 
157
+ ### 🏭 Production Environment
158
 
159
+ #### Build Options
160
 
161
+ **Option 1: Helper Scripts**
162
  ```bash
 
 
 
163
  # Production build
164
  npm run dockerbuild:prod
165
  ```
166
 
167
+ **Option 2: Direct Docker Build Command**
 
 
 
168
  ```bash
 
 
 
 
169
  docker build . --target bolt-ai-production
170
  ```
171
 
172
+ **Option 3: Docker Compose Profile**
 
 
 
173
  ```bash
 
 
 
 
174
  docker-compose --profile production up
175
  ```
176
 
177
+ #### Running the Production Container
 
 
 
178
  ```bash
 
 
 
 
179
  docker run -p 5173:5173 --env-file .env.local bolt-ai:production
180
  ```
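 Since the development and production run commands above differ only in the image tag, a tiny wrapper can assemble either one. This is a convenience sketch of our own (not a repo script); it only prints the command so you can inspect it before running:

```shell
# Sketch: assemble the `docker run` command for a given profile.
# Port mapping and image naming follow the commands shown above.
bolt_run_cmd() {
  local profile="${1:-production}"   # "development" or "production"
  echo "docker run -p 5173:5173 --env-file .env.local bolt-ai:${profile}"
}

bolt_run_cmd development
# eval "$(bolt_run_cmd production)"  # execute it once you are happy with it
```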
181
 
182
+ ---
183
 
184
+ ### Coolify Deployment
185
 
186
+ For an easy deployment process, use [Coolify](https://github.com/coollabsio/coolify):
187
+ 1. Import your Git repository into Coolify.
188
+ 2. Choose **Docker Compose** as the build pack.
189
+ 3. Configure environment variables (e.g., API keys).
190
+ 4. Set the start command:
191
  ```bash
192
  docker compose --profile production up
193
  ```
 
 
 
 
194
 
195
+ ---
196
+
197
+ ## πŸ› οΈ VS Code Dev Containers Integration
198
 
199
+ The `docker-compose.yaml` configuration is compatible with **VS Code Dev Containers**, making it easy to set up a development environment directly in Visual Studio Code.
200
 
201
+ ### Steps to Use Dev Containers
 
 
202
 
203
+ 1. Open the command palette in VS Code (`Ctrl+Shift+P` or `Cmd+Shift+P` on macOS).
204
+ 2. Select **Dev Containers: Reopen in Container**.
205
+ 3. Choose the **development** profile when prompted.
206
+ 4. VS Code will rebuild the container and open it with the pre-configured environment.
207
 
208
+ ---
 
 
 
209
 
210
+ ## πŸ”‘ Environment Variables
211
 
212
+ Ensure `.env.local` is configured correctly with:
213
+ - API keys.
214
+ - Context-specific configurations.
215
+
216
+ Example for the `DEFAULT_NUM_CTX` variable:
217
+ ```bash
218
+ DEFAULT_NUM_CTX=24576 # Uses 32GB VRAM
219
+ ```
FAQ.md CHANGED
@@ -1,8 +1,7 @@
1
- [![bolt.diy: AI-Powered Full-Stack Web Development in the Browser](./public/social_preview_index.jpg)](https://bolt.diy)
2
 
3
- # bolt.diy
4
-
5
- ## Recommended Models for bolt.diy
6
 
7
  For the best experience with bolt.diy, we recommend using the following models:
8
 
@@ -13,51 +12,80 @@ For the best experience with bolt.diy, we recommend using the following models:
13
  - **Qwen 2.5 Coder 32b**: Best model for self-hosting with reasonable hardware requirements
14
 
15
  **Note**: Models with fewer than 7b parameters typically lack the capability to properly interact with bolt!
 
16
 
17
- ## FAQ
 
18
 
19
- ### How do I get the best results with bolt.diy?
 
20
 
21
- - **Be specific about your stack**: If you want to use specific frameworks or libraries (like Astro, Tailwind, ShadCN, or any other popular JavaScript framework), mention them in your initial prompt to ensure bolt scaffolds the project accordingly.
 
22
 
23
- - **Use the enhance prompt icon**: Before sending your prompt, try clicking the 'enhance' icon to have the AI model help you refine your prompt, then edit the results before submitting.
 
24
 
25
- - **Scaffold the basics first, then add features**: Make sure the basic structure of your application is in place before diving into more advanced functionality. This helps Bolt.diy understand the foundation of your project and ensure everything is wired up right before building out more advanced functionality.
 
 
 
26
 
27
- - **Batch simple instructions**: Save time by combining simple instructions into one message. For example, you can ask Bolt.diy to change the color scheme, add mobile responsiveness, and restart the dev server, all in one go saving you time and reducing API credit consumption significantly.
 
28
 
29
- ### Why are there so many open issues/pull requests?
 
30
 
31
- bolt.diy was started simply to showcase how to edit an open source project and to do something cool with local LLMs on my (@ColeMedin) YouTube channel! However, it quickly grew into a massive community project that I am working hard to keep up with the demand of by forming a team of maintainers and getting as many people involved as I can. That effort is going well and all of our maintainers are ABSOLUTE rockstars, but it still takes time to organize everything so we can efficiently get through all the issues and PRs. But rest assured, we are working hard and even working on some partnerships behind the scenes to really help this project take off!
 
32
 
33
- ### How do local LLMs fair compared to larger models like Claude 3.5 Sonnet for bolt.diy/bolt.new?
 
 
34
 
35
- As much as the gap is quickly closing between open source and massive close source models, you’re still going to get the best results with the very large models like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b. This is one of the big tasks we have at hand - figuring out how to prompt better, use agents, and improve the platform as a whole to make it work better for even the smaller local LLMs!
 
36
 
37
- ### I'm getting the error: "There was an error processing this request"
38
 
39
- If you see this error within bolt.diy, that is just the application telling you there is a problem at a high level, and this could mean a number of different things. To find the actual error, please check BOTH the terminal where you started the application (with Docker or pnpm) and the developer console in the browser. For most browsers, you can access the developer console by pressing F12 or right clicking anywhere in the browser and selecting β€œInspect”. Then go to the β€œconsole” tab in the top right.
 
40
 
41
- ### I'm getting the error: "x-api-key header missing"
 
42
 
43
- We have seen this error a couple times and for some reason just restarting the Docker container has fixed it. This seems to be Ollama specific. Another thing to try is try to run bolt.diy with Docker or pnpm, whichever you didn’t run first. We are still on the hunt for why this happens once and a while!
 
44
 
45
- ### I'm getting a blank preview when bolt.diy runs my app!
 
46
 
47
- We promise you that we are constantly testing new PRs coming into bolt.diy and the preview is core functionality, so the application is not broken! When you get a blank preview or don’t get a preview, this is generally because the LLM hallucinated bad code or incorrect commands. We are working on making this more transparent so it is obvious. Sometimes the error will appear in developer console too so check that as well.
 
 
 
48
 
49
- ### Everything works but the results are bad
 
 
50
 
51
- This goes to the point above about how local LLMs are getting very powerful but you still are going to see better (sometimes much better) results with the largest LLMs like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b. If you are using smaller LLMs like Qwen-2.5-Coder, consider it more experimental and educational at this point. It can build smaller applications really well, which is super impressive for a local LLM, but for larger scale applications you want to use the larger LLMs still!
 
 
 
 
52
 
53
- ### Received structured exception #0xc0000005: access violation
 
54
 
 
55
  If you are getting this, you are probably on Windows. The fix is generally to update the [Visual C++ Redistributable](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170)
56
 
57
- ### How to add an LLM:
58
-
59
- To make new LLMs available to use in this version of bolt.new, head on over to `app/utils/constants.ts` and find the constant MODEL_LIST. Each element in this array is an object that has the model ID for the name (get this from the provider's API documentation), a label for the frontend model dropdown, and the provider.
60
 
61
- By default, many providers are already implemented, but the YouTube video for this repo covers how to extend this to work with more providers if you wish!
62
 
63
- When you add a new model to the MODEL_LIST array, it will immediately be available to use when you run the app locally or reload it.
 
1
+ # Frequently Asked Questions (FAQ)
2
 
3
+ <details>
4
+ <summary><strong>What are the best models for bolt.diy?</strong></summary>
 
5
 
6
  For the best experience with bolt.diy, we recommend using the following models:
7
 
 
12
  - **Qwen 2.5 Coder 32b**: Best model for self-hosting with reasonable hardware requirements
13
 
14
  **Note**: Models with fewer than 7b parameters typically lack the capability to properly interact with bolt!
15
+ </details>
16
 
17
+ <details>
18
+ <summary><strong>How do I get the best results with bolt.diy?</strong></summary>
19
 
20
+ - **Be specific about your stack**:
21
+ Mention the frameworks or libraries you want to use (e.g., Astro, Tailwind, ShadCN) in your initial prompt. This ensures that bolt.diy scaffolds the project according to your preferences.
22
 
23
+ - **Use the enhance prompt icon**:
24
+ Before sending your prompt, click the *enhance* icon to let the AI refine your prompt. You can edit the suggested improvements before submitting.
25
 
26
+ - **Scaffold the basics first, then add features**:
27
+ Ensure the foundational structure of your application is in place before introducing advanced functionality. This helps bolt.diy establish a solid base to build on.
28
 
29
+ - **Batch simple instructions**:
30
+ Combine simple tasks into a single prompt to save time and reduce API credit consumption. For example:
31
+ *"Change the color scheme, add mobile responsiveness, and restart the dev server."*
32
+ </details>
33
 
34
+ <details>
35
+ <summary><strong>How do I contribute to bolt.diy?</strong></summary>
36
 
37
+ Check out our [Contribution Guide](CONTRIBUTING.md) for more details on how to get involved!
38
+ </details>
39
 
40
+ <details>
41
+ <summary><strong>What are the future plans for bolt.diy?</strong></summary>
42
 
43
+ Visit our [Roadmap](https://roadmap.sh/r/ottodev-roadmap-2ovzo) for the latest updates.
44
+ New features and improvements are on the way!
45
+ </details>
46
 
47
+ <details>
48
+ <summary><strong>Why are there so many open issues/pull requests?</strong></summary>
49
 
50
+ bolt.diy began as a small showcase project on @ColeMedin's YouTube channel to explore editing open-source projects with local LLMs. However, it quickly grew into a massive community effort!
51
 
52
+ We're forming a team of maintainers to manage demand and streamline issue resolution. The maintainers are rockstars, and we're also exploring partnerships to help the project thrive.
53
+ </details>
54
 
55
+ <details>
56
+ <summary><strong>How do local LLMs compare to larger models like Claude 3.5 Sonnet for bolt.diy?</strong></summary>
57
 
58
+ While local LLMs are improving rapidly, larger models like GPT-4o, Claude 3.5 Sonnet, and DeepSeek Coder V2 236b still offer the best results for complex applications. Our ongoing focus is to improve prompts, agents, and the platform to better support smaller local LLMs.
59
+ </details>
60
 
61
+ <details>
62
+ <summary><strong>Common Errors and Troubleshooting</strong></summary>
63
 
64
+ ### **"There was an error processing this request"**
65
+ This generic error message means something went wrong. Check both:
66
+ - The terminal (if you started the app with Docker or `pnpm`).
67
+ - The developer console in your browser (press `F12` or right-click > *Inspect*, then go to the *Console* tab).
68
 
69
+ ### **"x-api-key header missing"**
70
+ This error is sometimes resolved by restarting the Docker container.
71
+ If that doesn't work, try switching from Docker to `pnpm` or vice versa. We're actively investigating this issue.
72
 
73
+ ### **Blank preview when running the app**
74
+ A blank preview often occurs due to hallucinated bad code or incorrect commands.
75
+ To troubleshoot:
76
+ - Check the developer console for errors.
77
+ - Remember, previews are core functionality, so the app isn't broken! We're working on making these errors more transparent.
78
 
79
+ ### **"Everything works, but the results are bad"**
80
+ Local LLMs like Qwen-2.5-Coder are powerful for small applications but still experimental for larger projects. For better results, consider using larger models like GPT-4o, Claude 3.5 Sonnet, or DeepSeek Coder V2 236b.
81
 
82
+ ### **"Received structured exception #0xc0000005: access violation"**
83
  If you are getting this, you are probably on Windows. The fix is generally to update the [Visual C++ Redistributable](https://learn.microsoft.com/en-us/cpp/windows/latest-supported-vc-redist?view=msvc-170)
84
 
85
+ ### **"Miniflare or Wrangler errors in Windows"**
86
+ You will need the latest version of Visual Studio C++ installed (14.40.33816); more information here: https://github.com/stackblitz-labs/bolt.diy/issues/19.
87
+ </details>
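+ For the terminal side of the troubleshooting steps above, a simple filter helps surface the relevant lines in noisy output. A minimal grep-based sketch (our own convenience, not part of the repo; it works on any log stream piped into it):

```shell
# Sketch: filter a log stream down to likely error lines.
filter_errors() {
  grep -iE 'error|exception|failed'
}

# Example: feed it the app's terminal output, e.g.
#   pnpm run dev 2>&1 | filter_errors
printf 'info: server started\nError: request failed\n' | filter_errors
```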
88
 
89
+ ---
90
 
91
+ Got more questions? Feel free to reach out or open an issue in our GitHub repo!
docs/docs/CONTRIBUTING.md CHANGED
@@ -1,246 +1,219 @@
1
  # Contribution Guidelines
2
 
3
- ## πŸ“‹ Table of Contents
4
- - [Code of Conduct](#code-of-conduct)
5
- - [How Can I Contribute?](#how-can-i-contribute)
6
- - [Pull Request Guidelines](#pull-request-guidelines)
7
- - [Coding Standards](#coding-standards)
8
- - [Development Setup](#development-setup)
9
- - [Deploymnt with Docker](#docker-deployment-documentation)
10
 
11
  ---
12
 
13
- ## Code of Conduct
14
 
15
- This project and everyone participating in it is governed by our Code of Conduct. By participating, you are expected to uphold this code. Please report unacceptable behavior to the project maintainers.
16
 
17
  ---
18
 
19
- ## How Can I Contribute?
20
-
21
- ### 🐞 Reporting Bugs and Feature Requests
22
- - Check the issue tracker to avoid duplicates
23
- - Use the issue templates when available
24
- - Include as much relevant information as possible
25
- - For bugs, add steps to reproduce the issue
26
-
27
- ### πŸ”§ Code Contributions
28
- 1. Fork the repository
29
- 2. Create a new branch for your feature/fix
30
- 3. Write your code
31
- 4. Submit a pull request
32
 
33
- ### ✨ Becoming a Core Contributor
34
- We're looking for dedicated contributors to help maintain and grow this project. If you're interested in becoming a core contributor, please fill out our [Contributor Application Form](https://forms.gle/TBSteXSDCtBDwr5m7).
35
 
36
  ---
37
 
38
- ## Pull Request Guidelines
39
 
40
- ### πŸ“ PR Checklist
41
- - [ ] Branch from the main branch
42
- - [ ] Update documentation if needed
43
- - [ ] Manually verify all new functionality works as expected
44
- - [ ] Keep PRs focused and atomic
45
 
46
- ### πŸ‘€ Review Process
47
- 1. Manually test the changes
48
- 2. At least one maintainer review required
49
- 3. Address all review comments
50
- 4. Maintain clean commit history
51
 
52
- ---
53
-
54
- ## Coding Standards
55
-
56
- ### πŸ’» General Guidelines
57
- - Follow existing code style
58
- - Comment complex logic
59
- - Keep functions focused and small
60
- - Use meaningful variable names
61
 
62
  ---
63
 
64
- ## Development Setup
65
 
66
- ### πŸ”„ Initial Setup
67
- 1. Clone the repository:
68
- ```bash
69
- git clone https://github.com/stackblitz-labs/bolt.diy.git
70
- ```
71
 
72
- 2. Install dependencies:
73
- ```bash
74
- pnpm install
75
- ```
 
76
 
77
- 3. Set up environment variables:
78
- - Rename `.env.example` to `.env.local`
79
- - Add your LLM API keys (only set the ones you plan to use):
80
- ```bash
81
- GROQ_API_KEY=XXX
82
- HuggingFace_API_KEY=XXX
83
- OPENAI_API_KEY=XXX
84
- ANTHROPIC_API_KEY=XXX
85
- ...
86
- ```
87
- - Optionally set debug level:
88
- ```bash
89
- VITE_LOG_LEVEL=debug
90
- ```
91
 
92
- - Optionally set context size:
93
- ```bash
94
- DEFAULT_NUM_CTX=32768
95
- ```
96
 
97
- Some Example Context Values for the qwen2.5-coder:32b models are.
98
-
99
- * DEFAULT_NUM_CTX=32768 - Consumes 36GB of VRAM
100
- * DEFAULT_NUM_CTX=24576 - Consumes 32GB of VRAM
101
- * DEFAULT_NUM_CTX=12288 - Consumes 26GB of VRAM
102
- * DEFAULT_NUM_CTX=6144 - Consumes 24GB of VRAM
103
 
104
- **Important**: Never commit your `.env.local` file to version control. It's already included in .gitignore.
105
 
106
- ### 🚀 Running the Development Server
107
  ```bash
108
  pnpm run dev
109
  ```
110
-
111
- **Note**: You will need Google Chrome Canary to run this locally if you use Chrome! It's an easy install and a good browser for web development anyway.
112
 
113
  ---
114
 
115
- ## Testing
116
-
117
- Run the test suite with:
118
 
 
119
  ```bash
120
  pnpm test
121
  ```
122
 
123
  ---
124
 
125
- ## Deployment
126
-
127
- To deploy the application to Cloudflare Pages:
128
 
 
129
  ```bash
130
  pnpm run deploy
131
  ```
132
-
133
- Make sure you have the necessary permissions and Wrangler is correctly configured for your Cloudflare account.
134
 
135
  ---
136
 
137
- # Docker Deployment Documentation
138
 
139
- This guide outlines various methods for building and deploying the application using Docker.
140
 
141
- ## Build Methods
142
 
143
- ### 1. Using Helper Scripts
144
 
145
- NPM scripts are provided for convenient building:
146
 
 
147
  ```bash
148
  # Development build
149
  npm run dockerbuild
150
-
151
- # Production build
152
- npm run dockerbuild:prod
153
  ```
154
 
155
- ### 2. Direct Docker Build Commands
156
-
157
- You can use Docker's target feature to specify the build environment:
158
-
159
  ```bash
160
- # Development build
161
  docker build . --target bolt-ai-development
162
-
163
- # Production build
164
- docker build . --target bolt-ai-production
165
  ```
166
 
167
- ### 3. Docker Compose with Profiles
168
-
169
- Use Docker Compose profiles to manage different environments:
170
-
171
  ```bash
172
- # Development environment
173
  docker-compose --profile development up
 
174
 
175
- # Production environment
176
- docker-compose --profile production up
 
177
  ```
178
 
179
  ---
180
 
181
- ## Running the Application
182
 
183
- After building using any of the methods above, run the container with:
184
 
 
185
  ```bash
186
- # Development
187
- docker run -p 5173:5173 --env-file .env.local bolt-ai:development
188
 
189
- # Production
 
190
  docker run -p 5173:5173 --env-file .env.local bolt-ai:production
191
  ```
192
 
193
  ---
194
 
195
- ## Deployment with Coolify
196
 
197
- [Coolify](https://github.com/coollabsio/coolify) provides a straightforward deployment process:
198
-
199
- 1. Import your Git repository as a new project
200
- 2. Select your target environment (development/production)
201
- 3. Choose "Docker Compose" as the Build Pack
202
- 4. Configure deployment domains
203
- 5. Set the custom start command:
204
  ```bash
205
  docker compose --profile production up
206
  ```
207
- 6. Configure environment variables
208
- - Add necessary AI API keys
209
- - Adjust other environment variables as needed
210
- 7. Deploy the application
211
 
212
  ---
213
 
214
- ## VS Code Integration
215
 
216
- The `docker-compose.yaml` configuration is compatible with VS Code dev containers:
217
 
218
- 1. Open the command palette in VS Code
219
- 2. Select the dev container configuration
220
- 3. Choose the "development" profile from the context menu
221
 
222
- ---
223
-
224
- ## Environment Files
225
-
226
- Ensure you have the appropriate `.env.local` file configured before running the containers. This file should contain:
227
- - API keys
228
- - Environment-specific configurations
229
- - Other required environment variables
230
 
231
  ---
232
 
233
- ## DEFAULT_NUM_CTX
234
 
235
- The `DEFAULT_NUM_CTX` environment variable can be used to limit the maximum number of context values used by the qwen2.5-coder model. For example, to limit the context to 24576 values (which uses 32GB of VRAM), set `DEFAULT_NUM_CTX=24576` in your `.env.local` file.
 
 
236
 
237
- First off, thank you for considering contributing to bolt.diy! This fork aims to expand the capabilities of the original project by integrating multiple LLM providers and enhancing functionality. Every contribution helps make bolt.diy a better tool for developers worldwide.
238
-
239
- ---
240
-
241
- ## Notes
242
-
243
- - Port 5173 is exposed and mapped for both development and production environments
244
- - Environment variables are loaded from `.env.local`
245
- - Different profiles (development/production) can be used for different deployment scenarios
246
- - The configuration supports both local development and production deployment
 
  # Contribution Guidelines

+ Welcome! This guide provides all the details you need to contribute effectively to the project. Thank you for helping us make **bolt.diy** a better tool for developers worldwide. πŸ’‘

  ---

+ ## πŸ“‹ Table of Contents

+ 1. [Code of Conduct](#code-of-conduct)
+ 2. [How Can I Contribute?](#how-can-i-contribute)
+ 3. [Pull Request Guidelines](#pull-request-guidelines)
+ 4. [Coding Standards](#coding-standards)
+ 5. [Development Setup](#development-setup)
+ 6. [Testing](#testing)
+ 7. [Deployment](#deployment)
+ 8. [Docker Deployment](#docker-deployment)
+ 9. [VS Code Dev Containers Integration](#vs-code-dev-containers-integration)

  ---

+ ## πŸ›‘οΈ Code of Conduct

+ This project is governed by our **Code of Conduct**. By participating, you agree to uphold this code. Report unacceptable behavior to the project maintainers.

  ---

+ ## πŸ› οΈ How Can I Contribute?

+ ### 1️⃣ Reporting Bugs or Feature Requests
+ - Check the [issue tracker](#) to avoid duplicates.
+ - Use issue templates (if available).
+ - Provide detailed, relevant information and steps to reproduce bugs.

+ ### 2️⃣ Code Contributions
+ 1. Fork the repository.
+ 2. Create a feature or fix branch.
+ 3. Write and test your code.
+ 4. Submit a pull request (PR).

+ ### 3️⃣ Join as a Core Contributor
+ Interested in maintaining and growing the project? Fill out our [Contributor Application Form](https://forms.gle/TBSteXSDCtBDwr5m7).

  ---

+ ## βœ… Pull Request Guidelines

+ ### PR Checklist
+ - Branch from the **main** branch.
+ - Update documentation, if needed.
+ - Test all functionality manually.
+ - Focus on one feature/bug per PR.

+ ### Review Process
+ 1. Manual testing by reviewers.
+ 2. At least one maintainer review required.
+ 3. Address review comments.
+ 4. Maintain a clean commit history.

+ ---

+ ## πŸ“ Coding Standards

+ ### General Guidelines
+ - Follow existing code style.
+ - Comment complex logic.
+ - Keep functions small and focused.
+ - Use meaningful variable names.

+ ---

+ ## πŸ–₯️ Development Setup

+ ### 1️⃣ Initial Setup
+ - Clone the repository:
+   ```bash
+   git clone https://github.com/stackblitz-labs/bolt.diy.git
+   ```
+ - Install dependencies:
+   ```bash
+   pnpm install
+   ```
+ - Set up environment variables:
+   1. Rename `.env.example` to `.env.local`.
+   2. Add your API keys:
+      ```bash
+      GROQ_API_KEY=XXX
+      HuggingFace_API_KEY=XXX
+      OPENAI_API_KEY=XXX
+      ...
+      ```
+   3. Optionally set:
+      - Debug level: `VITE_LOG_LEVEL=debug`
+      - Context size: `DEFAULT_NUM_CTX=32768`

+ **Note**: Never commit your `.env.local` file to version control. It’s already in `.gitignore`.
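
As a rough sketch of what the variables above amount to: an env file is a plain list of `KEY=value` lines, and `set -a` is one portable way to export everything it defines into child processes. All names and values below are placeholders, and the temp path is used only so the sketch is self-contained:

```shell
# Hypothetical .env.local written to a temp path; the real file lives in the repo root.
cat > /tmp/.env.local <<'EOF'
GROQ_API_KEY=XXX
VITE_LOG_LEVEL=debug
DEFAULT_NUM_CTX=32768
EOF

# 'set -a' marks every variable assigned while it is active for export,
# so sourcing the file publishes its values to child processes.
set -a
. /tmp/.env.local
set +a

echo "$DEFAULT_NUM_CTX"   # -> 32768
```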
+ ### 2️⃣ Run Development Server
  ```bash
  pnpm run dev
  ```
+ **Tip**: Use **Google Chrome Canary** for local testing.

  ---

+ ## πŸ§ͺ Testing

+ Run the test suite with:
  ```bash
  pnpm test
  ```

  ---

+ ## πŸš€ Deployment

+ ### Deploy to Cloudflare Pages
  ```bash
  pnpm run deploy
  ```
+ Ensure you have the required permissions and that Wrangler is configured.

  ---

+ ## 🐳 Docker Deployment

+ This section outlines the methods for deploying the application using Docker. The processes for **Development** and **Production** are provided separately for clarity.

+ ---

+ ### πŸ§‘β€πŸ’» Development Environment

+ #### Build Options

+ **Option 1: Helper Scripts**
  ```bash
  # Development build
  npm run dockerbuild
  ```

+ **Option 2: Direct Docker Build Command**
  ```bash
  docker build . --target bolt-ai-development
  ```

+ **Option 3: Docker Compose Profile**
  ```bash
  docker-compose --profile development up
+ ```

+ #### Running the Development Container
+ ```bash
+ docker run -p 5173:5173 --env-file .env.local bolt-ai:development
  ```
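
A quick way to confirm the container actually serves traffic is to probe the published port. This is an optional sketch, assuming the default `-p 5173:5173` mapping used above and that `curl` is available:

```shell
# Sanity check: 'docker run -p 5173:5173' publishes the dev server on
# localhost:5173 (adjust URL if you remapped the port).
URL="http://localhost:5173"
MSG="dev server not reachable at $URL"
if curl -sf -o /dev/null "$URL" 2>/dev/null; then
  MSG="dev server up at $URL"
fi
echo "$MSG"
```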
  ---

+ ### 🏭 Production Environment

+ #### Build Options

+ **Option 1: Helper Scripts**
  ```bash
+ # Production build
+ npm run dockerbuild:prod
+ ```

+ **Option 2: Direct Docker Build Command**
+ ```bash
+ docker build . --target bolt-ai-production
+ ```

+ **Option 3: Docker Compose Profile**
+ ```bash
+ docker-compose --profile production up
+ ```

+ #### Running the Production Container
+ ```bash
  docker run -p 5173:5173 --env-file .env.local bolt-ai:production
  ```

  ---

+ ### Coolify Deployment

+ For an easy deployment process, use [Coolify](https://github.com/coollabsio/coolify):
+ 1. Import your Git repository into Coolify.
+ 2. Choose **Docker Compose** as the build pack.
+ 3. Configure environment variables (e.g., API keys).
+ 4. Set the start command:
  ```bash
  docker compose --profile production up
  ```

  ---

+ ## πŸ› οΈ VS Code Dev Containers Integration

+ The `docker-compose.yaml` configuration is compatible with **VS Code Dev Containers**, making it easy to set up a development environment directly in Visual Studio Code.

+ ### Steps to Use Dev Containers

+ 1. Open the command palette in VS Code (`Ctrl+Shift+P` or `Cmd+Shift+P` on macOS).
+ 2. Select **Dev Containers: Reopen in Container**.
+ 3. Choose the **development** profile when prompted.
+ 4. VS Code will rebuild the container and open it with the pre-configured environment.

  ---

+ ## πŸ”‘ Environment Variables

+ Ensure `.env.local` is configured correctly with:
+ - API keys.
+ - Context-specific configurations.

+ Example for the `DEFAULT_NUM_CTX` variable:
+ ```bash
+ DEFAULT_NUM_CTX=24576 # Uses 32GB VRAM
+ ```
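
The `${VAR:-default}` parameter expansion is the usual shell pattern for treating a variable like `DEFAULT_NUM_CTX` as optional; the fallback value here is illustrative, not the application's actual default:

```shell
# Use DEFAULT_NUM_CTX if the environment provides it, otherwise fall back
# to a hypothetical default of 32768.
NUM_CTX="${DEFAULT_NUM_CTX:-32768}"
echo "context window: $NUM_CTX tokens"
```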