Welcome to Day 5 of the 67 AI Lab 10-Day Challenge!

So far, we’ve given our agent a brain using cloud giants like OpenAI and Google Gemini. These models are powerful, but they come with trade-offs: latency, cost, and most importantly, privacy. Every prompt you send leaves your network.

Today, we’re cutting the cord. We are going to run a Large Language Model (LLM) directly on your local machine (or the Raspberry Pi 5 we set up on Day 1) using Ollama.

Why Go Local?

  1. Privacy: Your data never leaves your hardware. This is critical for financial data, personal journals, or proprietary code.
  2. Cost: Local inference is free (minus electricity).
  3. Reliability: No internet? No problem. Your agent keeps working.
  4. Speed: You skip the network round-trip entirely, so responses start streaming immediately. Raw tokens-per-second still depends on your hardware, but nothing is waiting on the cloud.

Step 1: Installing Ollama

Ollama has become the de facto standard for running local models easily.

If you are on Linux (like our Pi 5) or macOS, installation is a one-liner:

curl -fsSL https://ollama.com/install.sh | sh

Once installed, start the service (on Linux, the install script usually registers a systemd service that starts automatically, so this may already be running):

ollama serve
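Before moving on, you can confirm the server is actually listening. A minimal sketch in Python, assuming Ollama’s default port 11434 (the root endpoint replies with a plain-text “Ollama is running” banner):

```python
import urllib.request
import urllib.error

# Minimal health check against Ollama's default port (11434).
def ollama_up(base_url: str = "http://localhost:11434") -> bool:
    try:
        with urllib.request.urlopen(base_url, timeout=2) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print("Ollama reachable:", ollama_up())
```

If this prints False, check that ollama serve is still running in another terminal.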

Step 2: Pulling Your First Model

For the Raspberry Pi 5 (8GB), we need efficient models. Meta’s Llama 3 8B or Mistral 7B are excellent choices: at the 4-bit quantization Ollama pulls by default, each fits comfortably in RAM while leaving room for the system.
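As a sanity check on whether a model fits, a rough rule of thumb: a 4-bit quantized model needs about half a byte per parameter for its weights, plus overhead for the KV cache and runtime. A back-of-envelope sketch (estimates only, not measured figures):

```python
# Rough RAM estimate for quantized model weights:
# parameters (in billions) * bytes per weight ~= gigabytes of weights.
def approx_weights_gb(params_billions: float, bits_per_weight: int = 4) -> float:
    return params_billions * (bits_per_weight / 8)

print(approx_weights_gb(7))  # Mistral 7B at 4-bit: ~3.5 GB
print(approx_weights_gb(8))  # Llama 3 8B at 4-bit: ~4.0 GB
```

Either way, a few gigabytes of headroom remain on an 8GB Pi for the OS and your agent.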

Open a terminal and pull the model:

ollama pull llama3

(Or ollama pull mistral if you prefer.)

Verify it’s running:

ollama run llama3 "Hello, are you running locally?"

If it replies, you have a local brain!
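Under the hood, ollama run talks to the local REST API, which streams its answer as newline-delimited JSON chunks, each carrying a "response" fragment. If you ever want to script against it directly, a small helper to reassemble a streamed reply (the sample chunks below are illustrative, not captured output):

```python
import json

# Ollama's /api/generate endpoint streams newline-delimited JSON;
# each chunk carries a partial "response". This stitches them together.
def join_stream(ndjson_text: str) -> str:
    parts = []
    for line in ndjson_text.splitlines():
        if line.strip():
            chunk = json.loads(line)
            parts.append(chunk.get("response", ""))
    return "".join(parts)

sample = '{"response": "Hello", "done": false}\n{"response": " there", "done": true}'
print(join_stream(sample))  # Hello there
```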

Step 3: Connecting OpenClaw to Ollama

OpenClaw supports Ollama out of the box, often via the OpenAI-compatible endpoint that Ollama exposes on port 11434.
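That endpoint speaks the same chat-completions dialect as OpenAI’s API, which is why a generic OpenAI-type provider works against it. For illustration, here is the shape of the request an OpenAI-compatible client POSTs to http://localhost:11434/v1/chat/completions, built offline with just the standard library:

```python
import json

# Shape of an OpenAI-style chat-completions request. Ollama accepts
# this body on /v1/chat/completions; the API key is ignored but
# most clients require one to be set anyway.
def build_chat_request(model: str, prompt: str) -> str:
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,
    }
    return json.dumps(payload)

print(build_chat_request("llama3", "Hello, are you running locally?"))
```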

In your OpenClaw workspace, locate your model configuration (usually in config.toml or providers.json depending on your version).

Add a new provider entry:

{
  "name": "ollama-local",
  "type": "openai",
  "baseUrl": "http://localhost:11434/v1",
  "apiKey": "ollama", 
  "models": ["llama3", "mistral"]
}

Note: The API key can be any string for Ollama, but it must not be empty.

Step 4: Switching the Agent

Now, tell your agent to use the local model. You can often do this dynamically in your session or by setting the default model in your environment.

# Example environment variable override
export OPENCLAW_MODEL="ollama-local/llama3"

Restart your OpenClaw agent.

Step 5: The “Disconnect” Test

The ultimate test:

  1. Disconnect your internet cable (or turn off Wi-Fi).
  2. Ask your agent: “Draft a confidential email about Project X.”
  3. Watch it stream the response.

If it works, you have achieved total data sovereignty.

Next Up

Now that our agent is secure and private, we need to make it useful. Tomorrow, on Day 6, we’ll build our first proper Skill: integrating with Memos to give our agent long-term memory and note-taking superpowers.

See you in the lab!