Run LLM models locally using Docker, right inside your codebase (No GUI needed!)
In this project, I deliberately skipped supporting GUIs like Open WebUI or LM Studio. The point is to run standalone LLM models with Ollama and call them directly from your own project/code instead of going through a third-party app (there's a minimal sketch after the stack list below). Everything is containerized with Docker, so setup is clean and repeatable. It's just a fun side project so my connections can learn more about running models locally in their own projects.
Tech stack used:
🐋 Docker
🦙 LLaMA via Ollama
💻 HTML/CSS/JS
🐍 Python + FastAPI
🌐 NGINX
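
To give a feel for the "no GUI" part, here's a minimal sketch of how a FastAPI endpoint can talk to Ollama's REST API directly. The service hostname "ollama", the "llama3" model tag, and the /chat route here are illustrative assumptions, not necessarily what the repo uses:

# Minimal sketch: FastAPI endpoint that forwards a prompt to Ollama's
# /api/generate endpoint running in another container.
import httpx
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Assumed Docker Compose service name and default Ollama port.
OLLAMA_URL = "http://ollama:11434/api/generate"

class Prompt(BaseModel):
    prompt: str

@app.post("/chat")
async def chat(body: Prompt):
    async with httpx.AsyncClient(timeout=120) as client:
        # With "stream": False, Ollama returns one JSON object whose
        # "response" field holds the full completion.
        resp = await client.post(OLLAMA_URL, json={
            "model": "llama3",  # assumed tag; use whichever model you pulled
            "prompt": body.prompt,
            "stream": False,
        })
        resp.raise_for_status()
        return {"answer": resp.json()["response"]}

NGINX then sits in front, serving the static HTML/CSS/JS and proxying /chat to this backend, so the whole thing stays self-contained in Docker.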
It's still early and a fun side project, but if you're into local model deployment, or just want to see how it all fits together, check it out at the link below!
https://github.com/Imran-ml/llama-chatbot-dockerized
#LLM #Docker #OpenSource #Chatbot #LLaMA #FastAPI