On Day 3 of our journey building the ultimate AI Lab on a Raspberry Pi, we’re giving our OpenClaw agent a serious upgrade: Deep Search capabilities.
While standard LLMs are great at reasoning, they often hallucinate facts or rely on outdated training data. To build a true “Researcher” agent, we need real-time, cited, and accurate information from the web.
Enter Perplexity AI.
Why Perplexity?
Perplexity isn’t just a search wrapper; it’s an answer engine. Unlike a standard search API, which returns a list of links for you to crawl, Perplexity’s API (specifically the sonar models) returns synthesized answers with citations. This is perfect for an autonomous agent because it removes the cognitive load of parsing raw HTML and synthesizing multiple sources—the API does the heavy lifting.
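To make that concrete, here's a minimal sketch of calling Perplexity's OpenAI-compatible chat completions endpoint with nothing but the Python standard library. The endpoint URL and the `citations` field come from Perplexity's public API docs; the helper names (`build_payload`, `deep_search`) are my own, not part of any OpenClaw API.

```python
import json
import os
import urllib.request

PPLX_URL = "https://api.perplexity.ai/chat/completions"

def build_payload(query: str, model: str = "sonar") -> dict:
    """Assemble the JSON body for a single search query."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": query}],
    }

def deep_search(query: str):
    """Return the synthesized answer plus its list of source URLs."""
    req = urllib.request.Request(
        PPLX_URL,
        data=json.dumps(build_payload(query)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['PERPLEXITY_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=60) as resp:
        data = json.load(resp)
    answer = data["choices"][0]["message"]["content"]
    citations = data.get("citations", [])  # source URLs, if provided
    return answer, citations
```

Note how small the agent-side code is: one request in, one cited answer out, with no HTML scraping in between.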
The Setup
We’ll be integrating the perplexity provider into our OpenClaw skills configuration.
Prerequisites
- Perplexity API Key: You’ll need an API key and credits in your Perplexity account.
- OpenClaw Instance: Running on your Raspberry Pi (setup in Day 1).
Configuration
Navigate to your OpenClaw skills directory and create a new skill or update your web_search tool configuration. If you are using the standard OpenClaw provider system, it looks something like this in your config.yaml or environment variables:
```bash
export PERPLEXITY_API_KEY="pplx-xxxxxxxxxxxxxxxx"
```
In your OpenClaw tool definition, we swap the standard search for Perplexity’s sonar model:
```yaml
# tools/researcher.yaml
name: researcher
description: "Deep search capabilities for complex queries"
model: perplexity/sonar-reasoning-pro
parameters:
  temperature: 0.1
```
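If you're curious how a `provider/model` string like that gets dispatched, here's a hypothetical sketch of the resolution step. OpenClaw's actual provider system may work differently; the config dict and function here are illustrative only.

```python
# Illustrative tool config mirroring researcher.yaml (not OpenClaw's
# real internal representation).
TOOL_CONFIG = {
    "name": "researcher",
    "model": "perplexity/sonar-reasoning-pro",
    "parameters": {"temperature": 0.1},
}

def resolve_model(spec: str):
    """Split a 'provider/model' spec into its two parts."""
    provider, _, model = spec.partition("/")
    return provider, model

provider, model = resolve_model(TOOL_CONFIG["model"])
# provider -> "perplexity", model -> "sonar-reasoning-pro"
```

The low temperature (0.1) matters here: for research tasks we want deterministic, grounded answers, not creative variation.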
The “Researcher” Persona
We don’t just want a tool; we want a behavior. We define a sub-agent or “persona” specifically for research tasks.
System Prompt:
You are The Researcher. Your goal is accuracy above all else. When asked a question, you do not guess. You use your search tools to find citations. You synthesize multiple sources to provide a comprehensive answer. You always link your sources.
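Wiring that persona into an actual request is just a matter of pairing the system prompt with the user's question. A minimal sketch (the function name is mine, not an OpenClaw convention):

```python
RESEARCHER_PROMPT = (
    "You are The Researcher. Your goal is accuracy above all else. "
    "When asked a question, you do not guess. You use your search tools "
    "to find citations. You synthesize multiple sources to provide a "
    "comprehensive answer. You always link your sources."
)

def researcher_messages(question: str) -> list:
    """Wrap a user question in the Researcher persona."""
    return [
        {"role": "system", "content": RESEARCHER_PROMPT},
        {"role": "user", "content": question},
    ]
```

Drop this `messages` list into the chat completions payload and every query the sub-agent makes carries the persona with it.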
Testing the Agent
Let’s test it with a query that requires up-to-date knowledge:
“What were the major breakthroughs in solid-state battery technology in late 2025?”
With a standard model (cutoff 2024), you’d get either a hallucination or an “I don’t know.” With The Researcher powered by Perplexity, the agent queries the live web, finds recent papers and news articles from late 2025/early 2026, and constructs a summary:
…In late 2025, QuantumScape announced their new ceramic separator achieving 2000 cycles with 95% retention… [Source 1]
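Turning the raw answer plus citation URLs into a readable, numbered source list is a few lines of glue code. A sketch (the function and the `[Source N]`-style numbering scheme are my own assumptions about how you'd render the output):

```python
def format_with_sources(answer: str, citations: list) -> str:
    """Append a numbered source list to a synthesized answer."""
    lines = [answer, "", "Sources:"]
    for i, url in enumerate(citations, start=1):
        lines.append(f"[{i}] {url}")
    return "\n".join(lines)
```

This keeps the agent's final output verifiable: every claim in the summary can be traced back to a numbered URL.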
Conclusion
By offloading the “search and synthesize” loop to Perplexity, we free up our local or primary LLM (like Claude or Gemini) to focus on higher-level reasoning and task management. Our Raspberry Pi 5 doesn’t need to index the web; it just needs to know who to ask.
Tomorrow, we’ll look at giving our agent “Eyes” with Vision capabilities. Stay tuned.