There’s a new entry in the personal AI assistant space that’s worth paying attention to — not because of breathless marketing, but because it solves a real problem: running a capable AI agent on hardware most projects wouldn’t even consider.
PicoClaw (github.com/sipeed/picoclaw) is an open-source, Go-based AI assistant from Sipeed that launched on February 9, 2026, hit 5,000 GitHub stars in 4 days, and is already accumulating a healthy pile of pull requests. Let’s look past the momentum and talk about what it actually is, what it does well, and where it falls short.
The Problem It Solves
If you’ve tried running OpenClaw (formerly ClawdBot) on a Raspberry Pi or a RISC-V SBC, you know the pain. OpenClaw is a TypeScript monolith — great for a desktop or a VPS, but 100MB+ RAM and a 30-second startup time are non-starters for embedded Linux. The Python-based nanobot project took a crack at this (~4,000 lines of Python, much lighter), but Go unlocks another level of efficiency.
PicoClaw’s headline numbers are honest and verifiable:
| | OpenClaw | Nanobot | PicoClaw |
|---|---|---|---|
| Language | TypeScript | Python | **Go** |
| Memory | 100MB+ | ~30MB | **~10MB** |
| Startup | ~30s | ~5s | |
| Architectures | x86 | x86/ARM | **x86/ARM/RISC-V** |
| Binary size | Large | N/A | Single self-contained binary |
Running on a $10 Sipeed LicheeRV-Nano (SOPHGO SG2002 RISC-V SoC, 256MB DDR3) isn’t a stunt — it’s a practical deployment target, and PicoClaw is genuinely designed for it.
What PicoClaw Actually Is
Let’s be precise: PicoClaw is a thin local agent that orchestrates calls to external LLM APIs. It is not a local model runtime. The “intelligence” comes from whatever provider you configure (OpenRouter, Zhipu, etc.). The device running PicoClaw handles:
- Chat platform integration (Telegram, Discord, QQ, DingTalk, LINE)
- Session and long-term memory management
- Scheduled job execution (cron)
- Routing your messages to the LLM and back
This is the right architecture for constrained hardware. Trying to run inference locally on a $10 board is a dead end — offloading compute to an API while keeping orchestration local is sensible engineering.
Setup in Practice
Installation is straightforward. You have three options:
Option 1 — Prebuilt binary (fastest):
```sh
# Download for your arch from the GitHub Releases page
# Available: riscv64-linux, arm64-linux, amd64-linux, amd64-windows
chmod +x picoclaw
./picoclaw init
```
Option 2 — Build from source:
```sh
git clone https://github.com/sipeed/picoclaw.git
cd picoclaw
make deps
make build      # current platform
make build-all  # all platforms
make install
```
Option 3 — Docker Compose (no local install needed):
```sh
git clone https://github.com/sipeed/picoclaw.git
cd picoclaw
cp config/config.example.json config/config.json
vim config/config.json  # set tokens and API keys
docker compose --profile gateway up -d
docker compose logs -f picoclaw-gateway
```
After `picoclaw init`, configure `~/.picoclaw/config.json`. The minimum you need is an LLM API key and, optionally, a Brave Search API key for web search. Then attach it to a chat platform — Telegram is the easiest:
```json
{
  "channels": {
    "telegram": {
      "enabled": true,
      "token": "YOUR_BOT_TOKEN",
      "allowFrom": ["YOUR_TELEGRAM_USER_ID"]
    }
  }
}
```
Get your user ID from @userinfobot on Telegram, get your bot token from @BotFather, run `picoclaw gateway`, and you’re talking to your assistant inside a minute. This is genuinely fast to get running.
Features Worth Knowing About
Messaging platforms: Telegram, Discord, QQ, DingTalk, and LINE are all supported out of the box via the `gateway` command. For most Western users, Telegram is the obvious pick.
Cron and scheduling: This is more capable than it sounds. You can issue natural language commands (“remind me every Monday at 9am to review pull requests”) and PicoClaw will parse and schedule them. It also accepts raw cron expressions for precise control. Scheduled jobs persist across restarts.
Persistent memory: PicoClaw maintains long-term memory across sessions, stored locally. Useful if you want your assistant to remember your preferences or ongoing project context.
Custom skills: The workspace is structured with dedicated directories for sessions, memory, scheduled jobs, and custom skills. Extending behavior means dropping code into the skills directory — the architecture is deliberately hackable.
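As a rough sketch of what that workspace might look like — `config.json` under `~/.picoclaw/` is confirmed by the setup steps above, but the directory names below are assumptions inferred from the description, not verified paths:

```
~/.picoclaw/
├── config.json   # tokens, API keys, channel settings
├── sessions/     # per-conversation state
├── memory/       # long-term memory store
├── cron/         # persisted scheduled jobs
└── skills/       # drop-in custom skill code
```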
LLM provider agnostic: OpenRouter support means you can point it at any model — Claude, GPT-4o, Llama, Gemini — without changing anything else. Useful if you want to experiment or cut costs.
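The practical payoff is that switching models should be a one-line config change, since OpenRouter normalizes the API surface. PicoClaw's actual provider key names aren't documented in this post, so treat the field names below as a hypothetical illustration of the shape, not the real schema:

```json
{
  "llm": {
    "provider": "openrouter",
    "apiKey": "YOUR_OPENROUTER_KEY",
    "model": "anthropic/claude-3.5-haiku"
  }
}
```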
The AI-Generated Codebase: A Real Consideration
Sipeed claims ~95% of PicoClaw’s core code was written by an AI agent via a “self-bootstrapping” process. This is either a fascinating experiment in AI-driven development or a red flag for long-term maintainability — probably both.
In practice, the Go codebase is reportedly clean and readable. Early contributors confirm it’s easy to navigate. But if you’re planning to extend it seriously, do your own audit before building production workflows on it. The project is very young (days old at time of writing), and the PR queue is already busy, which is a good sign for community health.
Honest Limitations
- No local inference. You need an internet connection and API credits. PicoClaw is a client, not an engine.
- Experimental maturity. Launched February 9, 2026. It works, but expect rough edges and breaking changes in the short term.
- Limited autonomy vs. OpenClaw. OpenClaw’s full agent mode handles complex multi-step workflows (email, calendar, flight check-in). PicoClaw is more constrained, by design. It’s an assistant, not an autonomous agent.
- No GUI. Interaction is entirely through CLI or chat apps. That’s a feature for some, a limitation for others.
Who Should Try It
PicoClaw is a good fit if you:
- Own a RISC-V or ARM SBC gathering dust and want a genuinely useful project for it
- Want a self-hosted chat assistant without a cloud VM
- Are experimenting with Go-based AI tooling and want a clean, readable reference
- Need a low-power, always-on assistant (a board sipping 0.5W beats a Pi running Python)
It’s probably not what you want if you need OpenClaw’s full autonomous workflow capabilities, or if you’re deploying on hardware where the 10MB vs 100MB distinction is irrelevant.
Bottom Line
PicoClaw does what it says. The numbers are real, the setup is fast, and the architecture is honest about what it offloads. It’s the right answer to a specific question: how do you run a capable AI assistant on hardware that costs less than a pizza?
It’s young, it has rough edges, and you shouldn’t run your business on it yet. But as a personal assistant running on a $10 board on your home network — pointing at whatever LLM you prefer, accessible from your phone via Telegram — it’s hard to argue with.
Check it out: github.com/sipeed/picoclaw