Dev tool · Open source · Shipped

Every dollar my agents spent,
one pane of glass.

A single Python file that reads every JSONL log my AI stack writes — across Anthropic, OpenAI, and Google — and renders a live dashboard with 5-second refresh. I built it because nothing off-the-shelf tracked all three.

Language: Python 3
Dependencies: Zero (stdlib)
Build time: < 1 hour
Lines of code: ~450
[Dashboard preview: "Agent Fleet Dashboard", auto-refresh 5s. Claude Code · today: $94.31 · Ari · today: $0.00 · Rex · 30d: $10.62 · Fleet total · today: $94.31. Daily spend chart, 30d.]
The problem

No tool tracks spend across providers.

My fleet runs on three providers: Anthropic for Claude Code, OpenAI for my personal agent Ari, and Google for Rex. Each vendor gives you a usage dashboard for their own bill — and nothing else.

I needed one pane of glass to see it all, with real-time burn, cache hit rates, and a per-project breakdown of where the money is going. So I wrote one in an hour.

What it tracks

Real numbers, not provider approximations.

The dashboard parses raw JSONL logs and computes cost using Anthropic's published 1-hour cache pricing — which is 2× what tools like ccusage assume. My totals match the Anthropic console.
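The difference is easy to see in the math. A minimal sketch, with an assumed Opus-class input rate (check the current price list before relying on these numbers):

```python
# Illustrative cache-write math. The rate below is an assumption for the
# example, not a current list price.
BASE_INPUT_PER_MTOK = 15.00  # assumed Opus-class input rate, USD per MTok

def cache_write_cost(tokens, ttl="1h"):
    # 1-hour cache writes bill at 2x the base input rate;
    # 5-minute writes bill at 1.25x.
    multiplier = 2.0 if ttl == "1h" else 1.25
    return tokens / 1_000_000 * BASE_INPUT_PER_MTOK * multiplier
```

Writing 500k tokens to the cache costs $15.00 at the 1-hour tier versus $9.38 at the 5-minute tier, so assuming the wrong tier materially undercounts the bill.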

$94.31
Today (Claude Code)
Live session burn as I work
94.4%
Prompt cache hit rate
Caching cuts my bill by ~5-10×
3
Providers unified
Anthropic + OpenAI + Google in one view
5s
Refresh interval
Live enough to feel the cost as you type
How it works

One Python file, zero dependencies.

python3 dashboard.py and a browser tab. That's the whole ship.

Data source 1
Claude Code writes a .jsonl to ~/.claude/projects/ every session. Dedupe by Anthropic message.id to avoid double-counting.
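A minimal sketch of that ingest step, assuming the Anthropic-style message.id sits under a message key (the exact JSONL layout may vary by Claude Code version):

```python
import json
from pathlib import Path

def load_claude_events(root=Path.home() / ".claude" / "projects"):
    """Parse every session .jsonl under `root`, deduped by message.id."""
    seen, events = set(), []
    for path in root.rglob("*.jsonl"):
        for line in path.read_text().splitlines():
            try:
                rec = json.loads(line)
            except json.JSONDecodeError:
                continue  # skip truncated or non-JSON lines
            msg = rec.get("message") if isinstance(rec, dict) else None
            msg_id = msg.get("id") if isinstance(msg, dict) else None
            if not msg_id or msg_id in seen:
                continue  # retries/resumes re-log the same message
            seen.add(msg_id)
            events.append(rec)
    return events
```
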
Data source 2
OpenClaw (Ari) writes session logs to ~/.openclaw/agents/main/sessions/. Cost is pre-computed per message in the log.
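Because the costs arrive pre-computed, the Ari reader is just a sum. A sketch, assuming a per-message cost field in USD (the real field name may differ):

```python
import json
from pathlib import Path

def openclaw_total(root=Path.home() / ".openclaw" / "agents" / "main" / "sessions"):
    """Sum the pre-computed per-message costs across all session logs."""
    total = 0.0
    for path in root.rglob("*.jsonl"):
        for line in path.read_text().splitlines():
            try:
                total += float(json.loads(line).get("cost", 0.0))
            except (json.JSONDecodeError, AttributeError, TypeError, ValueError):
                continue  # tolerate non-JSON lines and records without a cost
    return total
```
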
Data source 3
Hermes (Rex) runs on a remote VPS. The dashboard SSHes in, runs hermes insights every 5 minutes, and caches the result.
Pricing engine
Split by model family (Opus / Sonnet / Haiku / gpt-5.4 / Gemini Flash). Distinguishes 5-minute vs 1-hour cache writes — Claude Code uses the latter (2× cost).
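A sketch of the pricing split. The dollar figures are illustrative placeholders only (gpt/Gemini families omitted for brevity); swap in each provider's published rates:

```python
# USD per million tokens: (input, 1h cache write, cache read, output).
# Placeholder numbers -- not current list prices.
RATES = {
    "opus":   (15.00, 30.00, 1.50, 75.00),
    "sonnet": ( 3.00,  6.00, 0.30, 15.00),
    "haiku":  ( 0.80,  1.60, 0.08,  4.00),
}

def message_cost(model, usage):
    """usage maps token kind -> count, e.g. {"input": 1200, "cache_read": 90_000}."""
    family = next((f for f in RATES if f in model), "sonnet")
    inp, cache_write, cache_read, out = RATES[family]
    per_mtok = {"input": inp, "cache_write": cache_write,
                "cache_read": cache_read, "output": out}
    return sum(n / 1_000_000 * per_mtok.get(kind, 0.0)
               for kind, n in usage.items())
```

Note the cache-write column is exactly 2× the input column: that is the 1-hour tier the card mentions.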
Backend
Python stdlib http.server. Serves /api/stats as JSON and the single-page UI at /.
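The backend shape, sketched with a stand-in collect_stats() in place of the real log-parsing pipeline:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def collect_stats():
    """Stand-in for the real log-parsing pipeline."""
    return {"fleet_total_today": 94.31}

PAGE = b"<!doctype html><title>Agent Fleet Dashboard</title>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/api/stats":
            body = json.dumps(collect_stats()).encode()
            ctype = "application/json"
        else:
            body, ctype = PAGE, "text/html"  # the single-page UI
        self.send_response(200)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the console quiet
```
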
Frontend
Vanilla JavaScript + Chart.js via CDN. Polls /api/stats every 5 seconds. No build step.
Deploy
python3 dashboard.py — launches server on port 8765 and opens the browser. One file, no config.
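The launch step, with stdlib SimpleHTTPRequestHandler standing in for the dashboard's real handler:

```python
import webbrowser
from http.server import HTTPServer, SimpleHTTPRequestHandler

def launch(port=8765):
    """Bind the port and pop a browser tab; the caller runs serve_forever()."""
    server = HTTPServer(("127.0.0.1", port), SimpleHTTPRequestHandler)
    webbrowser.open(f"http://127.0.0.1:{port}/")
    return server

# launch().serve_forever()  # blocks until Ctrl-C
```
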
Why it matters

You can't optimize what you don't measure.

💰

True cost per project

See which project burns Opus and which uses Sonnet. Route cheap tasks to Haiku, save thousands a month.

🔥

Real-time burn rate

Last-hour cost + 5-hour rolling session. You feel the meter running, the way you should.

🧊

Cache hit truth

If the cache hit rate drops below 80%, something's wrong with your prompt structure. The dashboard tells you instantly.

🗂

Per-project ranking

Top 10 projects by spend, tokens, call count. Shows where to invest engineering time.

Want it?

Drop me a DM — I'll clean it up and release the script. Or wait for the YouTube video where I walk through the whole build.

@heyrexaiagent →