Yesterday, we installed OpenClaw on the Raspberry Pi. It was alive, but silent. Today, we give it a voice—and a brain.
A true agent isn’t just a script; it needs a Large Language Model (LLM) to reason, understand intent, and generate human-like responses. OpenClaw makes this incredibly easy by supporting multiple providers right out of the box.
In this guide, we’ll connect Google Gemini (for speed and reasoning) and OpenAI (as a backup or for specific tasks).
Prerequisites
You’ll need API keys for the providers you want to use:
- Google Gemini: create a key in Google AI Studio.
- OpenAI: create a key in your OpenAI platform account.
Step 1: The Configuration Wizard
The easiest way to set up auth is the built-in wizard. SSH into your Pi and run:
openclaw configure
Select Auth Profiles. You’ll see a list of supported providers.
- Choose Google Gemini.
- Paste your API Key when prompted.
- Repeat for OpenAI if desired.
OpenClaw stores these securely in ~/.openclaw/auth-profiles.json.
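The exact schema can vary between OpenClaw versions, but the file is plain JSON. Here's a hypothetical sketch of what an entry might look like (the field names are illustrative, not guaranteed):

```json
{
  "profiles": {
    "google-gemini": { "provider": "google", "apiKey": "AIza...redacted" },
    "openai": { "provider": "openai", "apiKey": "sk-...redacted" }
  }
}
```

Since the file holds raw keys, it's worth tightening its permissions: chmod 600 ~/.openclaw/auth-profiles.json.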
Step 2: Setting the Default Model
Once authenticated, you need to tell OpenClaw which brain to use by default.
openclaw config set agents.defaults.model.primary "google-gemini-cli/gemini-1.5-pro"
Note: Replace gemini-1.5-pro with the latest model ID available to you.
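Model IDs follow a provider/model pattern and rotate frequently. A couple of hypothetical alternatives — these IDs are examples only, so confirm which ones your key actually unlocks before setting one:

```shell
# A faster, cheaper Gemini tier (example ID):
openclaw config set agents.defaults.model.primary "google-gemini-cli/gemini-1.5-flash"

# Or route through your OpenAI key instead (example ID):
openclaw config set agents.defaults.model.primary "openai/gpt-4o"
```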
Step 3: Verifying the Brain
Let’s test if it’s working. We’ll use the agent command to send a direct prompt to the configured model.
openclaw agent --message "Hello! Who are you and what are you running on?"
If everything is wired up, you should see a response like:
“I am an OpenClaw agent running on a Raspberry Pi…”
Under the Hood
When you send that message, OpenClaw:
- Loads your USER.md and SOUL.md to understand its persona.
- Constructs a prompt with your request.
- Sends it to the Gemini API.
- Streams the response back to your terminal (or chat interface).
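The prompt-assembly step can be sketched in plain shell. This is an illustration of the idea, not OpenClaw’s actual internals, and the file locations are assumptions based on the file names above:

```shell
# Sketch: concatenate the persona files (if present) with the user's
# message to form the prompt that gets sent to the model's API.
PERSONA="$(cat ~/.openclaw/USER.md ~/.openclaw/SOUL.md 2>/dev/null)"
MSG="Hello! Who are you and what are you running on?"
PROMPT="${PERSONA}

User: ${MSG}"
printf '%s\n' "$PROMPT"
```

In the real agent, that assembled prompt is what goes over the wire to Gemini, which is why editing USER.md or SOUL.md changes how the agent answers.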
Why Gemini on Pi?
I chose Gemini for this build because of its long context window and speed. When running on a low-power device like a Pi, offloading the heavy lifting to a fast cloud API keeps the system responsive.
Next Up
Now that our agent can think, it needs to learn. Tomorrow, in Day 3, we’ll integrate Perplexity to give it real-time access to the web, allowing it to research topics it wasn’t trained on.
Stay tuned!