Super Bowl weekend wasn't about football for me. It was about how fast personal AI can move when the tools finally align.
Friday night my Clawdbot setup (a self-hosted OpenClaw fork) had exactly two agents: one pulling FSA-eligible expenses from receipt photos, the other triaging physical mail from snapshots. By Monday morning there were eleven agents running continuously on my home server — covering market monitoring, multi-ticker dashboards, family grocery lists and meal plans, flight email parsing into calendar invites, automatic debugging, git operations, styled HTML publishing to S3, and more. All of that shipped in roughly 72 hours.
Over the weekend I:
- Refactored the project into clean per-agent directories, each with its own config and changelog
- Built a Financial Market Monitoring agent that polls EDGAR filings every 5 minutes, sends Telegram alerts on new documents, and generates three daily AI-powered reports
- Created a unified multi-ticker dashboard (covering a handful of assets with concise 4-sentence LLM summaries) together with a Content Updater that pushes clean, tabbed HTML straight to S3
- Launched a Grocery/Shopping List agent that maintains shared JSON files in S3 and notifies family members whenever items are added or removed
- Added a full autonomy layer: Global Debugger (watching logs, de-duping incidents, automatically opening GitHub issues for errors it can't resolve itself), Git Getter (hourly pulls + manual trigger), Health Checker, and Git Pusher (guarded commits and pushes)
- Hardened the existing agents (extra validation on flight emails, real HTTPS/DNS checks, log rotation)
- Even found time to add a PlexSync agent for on-demand media transfers
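The market-monitoring agent above boils down to a poll/diff/alert loop. Here is a minimal sketch of that loop, assuming a hypothetical Telegram bot token and chat id, a local JSON file for state, and EDGAR's public Atom feed (the CIK, form type, and id-extraction are illustrative placeholders, not the actual agent's code):

```python
import json
import urllib.request

# Illustrative feed: recent 8-K filings for one CIK via EDGAR's Atom output.
EDGAR_FEED = ("https://www.sec.gov/cgi-bin/browse-edgar"
              "?action=getcompany&CIK=0000320193&type=8-K&output=atom")
BOT_TOKEN = "YOUR_BOT_TOKEN"   # hypothetical placeholder
CHAT_ID = "YOUR_CHAT_ID"       # hypothetical placeholder
SEEN_FILE = "seen_filings.json"

def new_entries(seen_ids, current_ids):
    """Return filing ids present in the feed but not yet alerted on."""
    return [fid for fid in current_ids if fid not in seen_ids]

def send_telegram(text):
    """Push one alert message via the Telegram Bot API sendMessage method."""
    url = f"https://api.telegram.org/bot{BOT_TOKEN}/sendMessage"
    data = json.dumps({"chat_id": CHAT_ID, "text": text}).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)

def poll_once():
    """One cron-driven pass: fetch the feed, diff against state, alert."""
    # EDGAR expects a descriptive User-Agent on automated requests.
    req = urllib.request.Request(
        EDGAR_FEED, headers={"User-Agent": "personal-agent you@example.com"})
    with urllib.request.urlopen(req) as resp:
        feed = resp.read().decode()
    # Crude <id> extraction; a real agent would parse the Atom XML properly.
    current = [line.split("<id>")[1].split("</id>")[0]
               for line in feed.splitlines() if "<id>" in line]
    try:
        with open(SEEN_FILE) as f:
            seen = json.load(f)
    except FileNotFoundError:
        seen = []
    for fid in new_entries(set(seen), current):
        send_telegram(f"New EDGAR filing: {fid}")
    with open(SEEN_FILE, "w") as f:
        json.dump(current, f)

# A 5-minute cron entry would simply invoke poll_once().
```

Keeping the diff logic in a pure function (`new_entries`) makes the de-duplication behavior testable without hitting the network.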
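The grocery-list agent's core trick is treating one JSON object in S3 as the shared source of truth that every family member's trigger mutates. A sketch of that pattern, assuming boto3 credentials are already configured and using a hypothetical bucket, key, and `notify` stand-in for the Telegram push:

```python
import json

BUCKET = "family-lists"        # hypothetical bucket name
KEY = "grocery/list.json"      # hypothetical object key

def apply_change(items, add=None, remove=None):
    """Pure list update so every trigger mutates state the same way."""
    items = [i for i in items if i != remove]
    if add and add not in items:
        items.append(add)
    return items

def update_list(add=None, remove=None, notify=print):
    """Read the shared JSON from S3, apply one change, write it back."""
    import boto3  # deferred so the pure helper works without AWS installed
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
    items = apply_change(json.loads(body), add=add, remove=remove)
    s3.put_object(Bucket=BUCKET, Key=KEY,
                  Body=json.dumps(items).encode(),
                  ContentType="application/json")
    if add:
        notify(f"Added to grocery list: {add}")
    if remove:
        notify(f"Removed from grocery list: {remove}")
    return items
```

A single JSON object sidesteps any database setup; S3's read-modify-write is good enough here because updates arrive one Telegram message at a time.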
Everything is triggered via Telegram or cron, runs on Gemini Flash at runtime, and stays alive through a self-updating git loop.
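The self-updating loop is conceptually tiny: a cron job pulls the repo and compares HEAD before and after, so downstream work only happens when something actually changed. A minimal sketch under assumed paths (the checkout location and health-checker script name are illustrative):

```python
import subprocess

REPO = "/home/pi/clawdbot"  # hypothetical checkout path

def head(repo):
    """Return the current commit hash of the checkout."""
    return subprocess.run(
        ["git", "-C", repo, "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True).stdout.strip()

def pull(repo=REPO):
    """Fast-forward pull; True only when new commits actually landed."""
    before = head(repo)
    subprocess.run(["git", "-C", repo, "pull", "--ff-only", "-q"], check=True)
    return head(repo) != before

def update_cycle():
    """Hourly cron entry point: only run the Health Checker on real changes."""
    if pull():
        subprocess.run(["python3", f"{REPO}/agents/health_checker.py"])
```

Comparing commit hashes rather than parsing `git pull` output keeps the "did anything change?" signal robust across git versions and locales.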
The Real Breakthrough: Agents Calling Other Agents
The biggest leap wasn't the number of agents — it was designing them to call each other. Now the Global Debugger can spot an error it can't fix and immediately open a labeled GitHub issue. The Market Dashboard can pull fresh data by invoking the Financial Market agent. Content Updater receives triggers from multiple report sources. Git Getter pulls new commits and only calls Health Checker when there are actual changes.
That single architectural decision turns a collection of scripts into a coordinated, resilient swarm that delegates work and compounds capability.
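The cross-agent call pattern described above can be sketched as a small registry that lets any agent invoke another by name. The agent names and payloads here are illustrative stand-ins, not the actual interfaces:

```python
AGENTS = {}

def agent(name):
    """Decorator registering a callable under a stable agent name."""
    def register(fn):
        AGENTS[name] = fn
        return fn
    return register

def call(name, **kwargs):
    """How one agent delegates to another instead of duplicating logic."""
    return AGENTS[name](**kwargs)

@agent("market")
def market_agent(ticker):
    # Placeholder: would fetch fresh filings/price data for one ticker.
    return {"ticker": ticker, "filings": []}

@agent("dashboard")
def dashboard_agent(tickers):
    # The dashboard gets fresh data by invoking the market agent,
    # mirroring the delegation described in the text.
    return [call("market", ticker=t) for t in tickers]
```

Because every agent is reachable through one `call()` entry point, adding a new caller (a debugger, a content updater) never requires touching the callee: registration is the whole contract.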
The Subscription Stack That Kept Momentum Alive
To sustain this pace I subscribe to five services:
- SuperGrok (~$30/mo) — Grok 4, DeepSearch, priority access
- Gemini (~$20/mo Pro) — runtime workhorse (Flash handles OCR, parsing, summarization, and always-on cron jobs cheaply and quickly)
- OpenAI/ChatGPT ($20/mo Plus) — fast code generation and execution-focused tasks
- Claude (~$20/mo Pro) — deeper reasoning, prompt engineering, architecture decisions, diff reviews
- AWS S3 — pennies a month for private file storage and public dashboard hosting
Roughly $90–110/mo total. Multiple subscriptions aren't overkill — they're the workaround for rate limits. Every model has rolling windows, daily caps, weekly resets. Intense build sessions burn through quotas fast. Switching between them means no forced breaks: Claude slows down mid-refactor → OpenAI picks up the code sprint → Claude returns for planning → Gemini powers the live agents → Grok fills in search gaps. Continuity is everything.
What This Pace Really Signals
In 2026 a single person can take real pain points — fintech tracking, family logistics, travel chaos — and turn them into proactive, always-on intelligence in a matter of days. Multi-agent systems that monitor, report, debug, delegate, and evolve themselves are no longer enterprise-only. The flywheel is real: AI helps build and maintain the AI that runs on it.
The whole thing still lives happily on a Raspberry Pi 5 (nowhere near maxed out). I like being able to see and touch the physical device, so full cloud isn't on the table yet — though better access controls could eventually open hybrid paths. Longer term, consolidating local LLMs might mean investing in a Mac Mini. For now the Pi keeps delivering.
