chore(doc): add prompt caching to README
README.md
```diff
@@ -31,6 +31,7 @@ https://thinktank.ottomator.ai
 - ✅ Ability to revert code to earlier version (@wonderwhy-er)
 - ✅ Cohere Integration (@hasanraiyan)
 - ✅ Dynamic model max token length (@hasanraiyan)
+- ✅ Prompt caching (@SujalXplores)
 - ⬜ **HIGH PRIORITY** - Prevent Bolt from rewriting files as often (file locking and diffs)
 - ⬜ **HIGH PRIORITY** - Better prompting for smaller LLMs (code window sometimes doesn't start)
 - ⬜ **HIGH PRIORITY** - Load local projects into the app
@@ -42,7 +43,6 @@ https://thinktank.ottomator.ai
 - ⬜ Perplexity Integration
 - ⬜ Vertex AI Integration
 - ⬜ Deploy directly to Vercel/Netlify/other similar platforms
-- ⬜ Prompt caching
 - ⬜ Better prompt enhancing
 - ⬜ Have LLM plan the project in a MD file for better results/transparency
 - ⬜ VSCode Integration with git-like confirmations
```
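For context, "prompt caching" here generally means marking a large, stable prefix (such as the system prompt) so the provider can reuse it across requests instead of reprocessing it each time. The sketch below shows the Anthropic-style shape, where a `cache_control` marker of type `"ephemeral"` on a system block sets the cache breakpoint; the `SystemBlock` type and `withPromptCaching` helper are illustrative assumptions, not the actual code from this commit.

```typescript
// Illustrative sketch of Anthropic-style prompt caching.
// The `cache_control: { type: "ephemeral" }` field follows Anthropic's
// Messages API; the helper and types here are hypothetical.
type SystemBlock = {
  type: "text";
  text: string;
  cache_control?: { type: "ephemeral" };
};

// Mark the last system block as a cache breakpoint: everything up to and
// including it becomes a reusable cached prefix on subsequent requests.
function withPromptCaching(system: SystemBlock[]): SystemBlock[] {
  if (system.length === 0) return system;
  return system.map((block, i) =>
    i === system.length - 1
      ? { ...block, cache_control: { type: "ephemeral" } }
      : block,
  );
}

// Usage: annotate the (large, unchanging) system prompt before each call.
const system = withPromptCaching([
  { type: "text", text: "You are Bolt, an AI coding assistant. <...long, stable instructions...>" },
]);
```

Because the cached prefix must be byte-identical between requests, only stable content (the system prompt, not the user's latest message) should sit before the breakpoint.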