{"nbformat":4,"nbformat_minor":0,"metadata":{"colab":{"provenance":[{"file_id":"https://github.com/5aharsh/collama/blob/main/Ollama_Setup.ipynb","timestamp":1754040362502}],"gpuType":"T4"},"kernelspec":{"name":"python3","display_name":"Python 3"},"language_info":{"name":"python"},"accelerator":"GPU"},"cells":[{"cell_type":"markdown","source":["# Run Ollama in Colab\n","---\n","\n","[![5aharsh/collama](https://raw.githubusercontent.com/5aharsh/collama/main/assets/banner.png)](https://github.com/5aharsh/collama)\n","\n","This is an example notebook which demonstrates how to run Ollama inside a Colab instance. With this you can run pretty much any small to medium sized models offerred by Ollama for free.\n","\n","For the list of available models check [models being offerred by Ollama](https://ollama.com/library).\n","\n","\n","## Before you proceed\n","---\n","\n","Since by default the runtime type of Colab instance is CPU based, in order to use LLM models make sure to change your runtime type to T4 GPU (or better if you're a paid Colab user). This can be done by going to **Runtime > Change runtime type**.\n","\n","While running your script be mindful of the resources you're using. This can be tracked at **Runtime > View resources**.\n","\n","## Running the notebook\n","---\n","\n","After configuring the runtime just run it with **Runtime > Run all**. And you can start tinkering around. This example uses [Llama 3.2](https://ollama.com/library/llama3.2) to generate a response from a prompted question using [LangChain Ollama Integration](https://python.langchain.com/docs/integrations/chat/ollama/)."],"metadata":{"id":"zyGk-87qnbWE"}},{"cell_type":"markdown","source":["## Installing Dependencies\n","---\n","\n","1. `pciutils` is required by Ollama to detect the GPU type.\n","2. Installation of Ollama in the runtime instance will be taken care by `curl -fsSL https://ollama.com/install.sh | sh`\n","\n","\n"],"metadata":{"id":"B1S1YL6EnYBB"}},{"cell_type":"code","source":["!sudo apt update\n","!sudo apt install -y pciutils\n","!curl -fsSL https://ollama.com/install.sh | sh"],"metadata":{"id":"YlVK9iG4AD5L"},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":["## Running Ollama\n","---\n","\n","In order to use Ollama it needs to run as a service in background parallel to your scripts. Becasue Jupyter Notebooks is built to run code blocks in sequence this make it difficult to run two blocks at the same time. 
{"cell_type":"markdown","source":["## Running Ollama\n","---\n","\n","To use Ollama, it needs to run as a service in the background, in parallel with your scripts. Because Jupyter notebooks run code blocks sequentially, this makes it difficult to run two blocks at the same time. As a workaround we will start the service with `subprocess` in Python so it doesn't block any cell from running.\n","\n","The service is started with the command `ollama serve`.\n","\n","`time.sleep(5)` adds a short delay so the Ollama service is up before the model is downloaded."],"metadata":{"id":"fGEJwjTPoKWH"}},{"cell_type":"code","source":["import threading\n","import subprocess\n","import time\n","\n","# Launch `ollama serve` as a background process so it keeps serving requests\n","# while the remaining cells run.\n","def run_ollama_serve():\n","  subprocess.Popen([\"ollama\", \"serve\"])\n","\n","thread = threading.Thread(target=run_ollama_serve)\n","thread.start()\n","time.sleep(5)  # give the service a moment to start before pulling the model"],"metadata":{"id":"Jh5CBAFxBYAC"},"execution_count":null,"outputs":[]},{"cell_type":"markdown","source":["## Pulling the Model\n","---\n","\n","Download the model with `ollama pull <model>`; the model name is set in the form field of the next cell.\n","\n","For other models, check https://ollama.com/library"],"metadata":{"id":"WcBLqZfyoHg4"}},{"cell_type":"code","source":["\n","model_url = 'goonsai/qwen2.5-3B-goonsai-nsfw-100k' # @param {type:'string'}\n","!ollama pull {model_url}\n","\n"],"metadata":{"id":"o2ghppmRDFny"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["!pip install langchain-ollama"],"metadata":{"id":"MbrT39oil6tK"},"execution_count":null,"outputs":[]},{"cell_type":"code","source":["from langchain_core.prompts import ChatPromptTemplate\n","from langchain_ollama.llms import OllamaLLM\n","from IPython.display import Markdown, display\n","\n","\n","prompt_input = 'egyptian_mythology' # @param {type:'string'}\n","template = \"\"\"Question: {question}\n","\n","Answer: Let's think step by step.\"\"\"\n","\n","prompt = ChatPromptTemplate.from_template(template)\n","\n","# The model name must match the model pulled above.\n","model = OllamaLLM(model=\"goonsai/qwen2.5-3B-goonsai-nsfw-100k\")\n","\n","chain = prompt | model\n","\n","# Render the model's answer as Markdown in the notebook output.\n","display(Markdown(chain.invoke({'question': prompt_input})))"],"metadata":{"id":"mUrk_3pL9LX7"},"execution_count":null,"outputs":[]}
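,{"cell_type":"markdown","source":["## Querying the Ollama API Directly (optional)\n","---\n","\n","The LangChain chain above talks to the `ollama serve` process over its local HTTP API. As a minimal sketch, assuming the service started earlier is listening on Ollama's default port `11434` and the model above has been pulled, the same service can also be queried directly with `requests`:"],"metadata":{}},{"cell_type":"code","source":["import requests\n","\n","# Minimal sketch (not part of the original walkthrough): call Ollama's generate endpoint directly.\n","# Assumes `ollama serve` is running on the default port 11434 and the model below has been pulled.\n","payload = {\n","    'model': 'goonsai/qwen2.5-3B-goonsai-nsfw-100k',\n","    'prompt': 'Why is the sky blue?',\n","    'stream': False,  # return a single JSON object instead of a token stream\n","}\n","\n","resp = requests.post('http://localhost:11434/api/generate', json=payload, timeout=600)\n","resp.raise_for_status()\n","print(resp.json()['response'])"],"metadata":{},"execution_count":null,"outputs":[]}]}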