# Running Ollama with AutoGPT
> **Important**: Ollama integration is only available when self-hosting the AutoGPT platform. It cannot be used with the cloud-hosted version.
Follow these steps to set up and run Ollama with the AutoGPT platform.
## Prerequisites
1. Make sure you have completed the [AutoGPT Setup](/platform/getting-started) steps. If you haven't, please do so before continuing with this guide.
2. Before starting, ensure you have [Ollama installed](https://ollama.com/download) on your machine.
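You can quickly confirm that Ollama is installed and on your `PATH` by checking its version:
```bash
ollama --version
```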
## Setup Steps
### 1. Launch Ollama
To properly set up Ollama for network access, follow these steps:
1. **Set the host environment variable:**
**Windows (Command Prompt):**
```
set OLLAMA_HOST=0.0.0.0:11434
```
**Linux/macOS (Terminal):**
```bash
export OLLAMA_HOST=0.0.0.0:11434
```
2. Start the Ollama server:
```
ollama serve
```
3. **Open a new terminal/command window** and download your desired model:
```
ollama pull llama3.2
```
> **Note**: This will download the [llama3.2](https://ollama.com/library/llama3.2) model. Keep the terminal with `ollama serve` running in the background throughout your session.
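Before moving on, you can confirm the server is up and the model is available (two quick checks, one with Ollama's CLI and one against its HTTP API):
```bash
# List locally downloaded models; llama3.2 should appear
ollama list

# The server answers on port 11434 if it is running
curl http://localhost:11434/api/version
```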
### 2. Start the Backend
Open a new terminal and navigate to the autogpt_platform directory:
```bash
cd autogpt_platform
docker compose up -d --build
```
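You can verify the containers came up cleanly before moving on:
```bash
docker compose ps
```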
### 3. Start the Frontend
Open a new terminal and navigate to the frontend directory:
```bash
cd autogpt_platform/frontend
corepack enable
pnpm i
pnpm dev
```
Then visit [http://localhost:3000](http://localhost:3000) to see the frontend running. After registering an account or logging in, navigate to the build page at [http://localhost:3000/build](http://localhost:3000/build)
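As a quick sanity check that the dev server is responding, you can request the page headers:
```bash
curl -I http://localhost:3000
```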
### 4. Using Ollama with AutoGPT
Now that both Ollama and the AutoGPT platform are running, we can move on to using Ollama with AutoGPT:
1. Add an AI Text Generator block to your workspace (any LLM block will work, but for this example we will be using the AI Text Generator block):

2. In the "LLM Model" dropdown, select "llama3.2" (this is the model we downloaded earlier).

> **Compatible Models**: Not all models work with Ollama in AutoGPT. Here are the models that are confirmed to work:
> - `llama3.2`
> - `llama3`
> - `llama3.1:405b`
> - `dolphin-mistral:latest`
3. **Set your local IP address** in the "Ollama Host" field:
**To find your local IP address:**
**Windows (Command Prompt):**
```
ipconfig
```
**Linux (Terminal):**
```bash
ip addr show
```
**macOS (Terminal):**
```bash
ifconfig
```
Look for your IPv4 address (e.g., `192.168.0.39`), then enter it with port `11434` in the "Ollama Host" field (you can verify the endpoint is reachable with the `curl` check shown after these steps):
```
192.168.0.39:11434
```

4. Now we need to add some prompts, then save and run the graph:

That's it! You've successfully set up the AutoGPT platform and made an LLM call to Ollama.
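If the block fails to connect, a quick way to confirm the endpoint is reachable from your machine is to ask the server which models it has (a sanity check using the example IP above; substitute your own):
```bash
curl http://192.168.0.39:11434/api/tags
```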

### Using Ollama on a Remote Server with AutoGPT
To run Ollama on a remote server, simply make sure the Ollama server is running and is accessible from other devices on your network (or remotely) on port 11434.
**To find your local IP address of the system running Ollama:**
**Windows (Command Prompt):**
```
ipconfig
```
**Linux (Terminal):**
```bash
ip addr show
```
**macOS (Terminal):**
```bash
ifconfig
```
Look for your IPv4 address (e.g., `192.168.0.39`).
Then follow the same steps as above, but enter the Ollama server's IP address in the "Ollama Host" field of the block settings, like so:
```
192.168.0.39:11434
```
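To confirm the remote server is reachable end to end, you can send a small test generation straight to its API (a sketch using the example IP above; substitute your server's IP and a model you have pulled):
```bash
curl http://192.168.0.39:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Say hello in one sentence.",
  "stream": false
}'
```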

## Troubleshooting
If you encounter any issues, verify that:
- Ollama is properly installed and running
- All terminals remain open during operation
- Docker is running before starting the backend
For common errors:
1. **Connection Refused**: Make sure Ollama is running and the host address is correct (also check the port; the default is 11434)
2. **Model Not Found**: Try running `ollama pull llama3.2` manually first
3. **Docker Issues**: Ensure Docker daemon is running with `docker ps`
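When in doubt, these three checks cover the most common failure points (assuming the default setup from this guide):
```bash
# 1. Is the Ollama server responding?
curl http://localhost:11434/api/version

# 2. Is the model downloaded?
ollama list

# 3. Are the backend containers up? (run from autogpt_platform)
docker compose ps
```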