leex279 committed
Commit 7f540b5 · unverified · 1 Parent(s): e196442

Update README.md


- Enhanced the text for the bolt.diy docs section and gave it better visibility, to guide people there instead of to the GitHub README, which is aimed more at developers
- Added "NodeJS based applications", as this was not clear and some people in the community asked about it

Files changed (1):
  1. README.md +4 -2
README.md CHANGED

@@ -3,8 +3,10 @@

  Welcome to bolt.diy, the official open source version of Bolt.new (previously known as oTToDev and bolt.new ANY LLM), which allows you to choose the LLM that you use for each prompt! Currently, you can use OpenAI, Anthropic, Ollama, OpenRouter, Gemini, LMStudio, Mistral, xAI, HuggingFace, DeepSeek, or Groq models - and it is easily extended to use any other model supported by the Vercel AI SDK! See the instructions below for running this locally and extending it to include more models.

- Check the [bolt.diy Docs](https://stackblitz-labs.github.io/bolt.diy/) for more information.
+ -----
+ Check the [bolt.diy Docs](https://stackblitz-labs.github.io/bolt.diy/) for the official installation instructions and more information.

+ -----
  Also [this pinned post in our community](https://thinktank.ottomator.ai/t/videos-tutorial-helpful-content/3243) has a bunch of incredible resources for running and deploying bolt.diy yourself!

  We have also launched an experimental agent called the "bolt.diy Expert" that can answer common questions about bolt.diy. Find it here on the [oTTomator Live Agent Studio](https://studio.ottomator.ai/).

@@ -91,7 +93,7 @@ project, please check the [project management guide](./PROJECT.md) to get starte

  ## Features

- - **AI-powered full-stack web development** directly in your browser.
+ - **AI-powered full-stack web development** for **NodeJS based applications** directly in your browser.
  - **Support for multiple LLMs** with an extensible architecture to integrate additional models.
  - **Attach images to prompts** for better contextual understanding.
  - **Integrated terminal** to view output of LLM-run commands.
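The README text in the diff above says the provider list is extensible and that a different LLM can be chosen for each prompt. As a rough sketch of that idea only (this is not bolt.diy's actual code; the `ProviderRegistry` class and all names here are hypothetical), a per-prompt provider/model lookup could be structured as a small registry:

```typescript
// Hypothetical per-prompt provider registry, loosely modeled on the idea of
// picking an LLM per prompt. Adding a provider is just one register() call.

interface ModelProvider {
  name: string;       // e.g. "openai", "ollama"
  models: string[];   // model ids this provider exposes
}

class ProviderRegistry {
  private providers = new Map<string, ModelProvider>();

  register(provider: ModelProvider): void {
    this.providers.set(provider.name, provider);
  }

  // Resolve a "provider/model" id such as "openai/gpt-4o".
  // Returns null when the provider or model is unknown.
  resolve(id: string): { provider: ModelProvider; model: string } | null {
    const parts = id.split("/");
    if (parts.length !== 2) return null;
    const [name, model] = parts;
    const provider = this.providers.get(name);
    if (!provider || !provider.models.includes(model)) return null;
    return { provider, model };
  }
}

const registry = new ProviderRegistry();
registry.register({ name: "openai", models: ["gpt-4o"] });
registry.register({ name: "ollama", models: ["llama3"] });

console.log(registry.resolve("ollama/llama3")?.model); // llama3
console.log(registry.resolve("mistral/large"));        // null (not registered)
```

In the real project the resolved model would then be handed to the Vercel AI SDK; here the registry only demonstrates why adding another provider is a localized change.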