Token Meter

Your meter for Claude Code & Codex.

A local-first dashboard that shows how many tokens you've spent, where they went, and how fast each model and MCP tool actually runs. Your JSONL never leaves your machine.

Free, MIT-licensed core available now. Pro ($5) and Pro+ ($24) ship later; no charge today.

npm install -g @whdrnr2583/token-meter
token-meter ingest && token-meter serve

Why Token Meter

📊 Multi-vendor in one view

Claude Code (~/.claude/projects/) and Codex (~/.codex/sessions/) sessions merged into one view. No SDK, no proxy.

🔍 MCP & tool breakdown

See exactly which MCP server eats your tokens and which tool is slow. Latency, response size, call count.

💵 USD-equivalent cost

Your Max plan is a flat fee, but what would the API have cost? Token Meter shows the saving (or burn). Estimates only, not your actual invoice.
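As a back-of-the-envelope sketch of how such an estimate works (the prices below are placeholders for illustration, not Token Meter's actual rate table):

```shell
# Worked example with PLACEHOLDER prices (USD per 1M tokens) --
# not Token Meter's real rate table.
awk 'BEGIN {
  in_tok = 2100000; out_tok = 400000   # tokens observed in a week
  in_usd = 3.00;    out_usd = 15.00    # placeholder API prices per 1M tokens
  printf "API-equivalent cost: $%.2f\n", in_tok/1e6*in_usd + out_tok/1e6*out_usd
}'
# prints: API-equivalent cost: $12.30
```

Compare that figure to your flat monthly fee to see the saving (or burn).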

🏠 Local-first, private

SQLite in ~/.tokenpulse/ (renamed to ~/.tokenmeter/ in a future release with an automatic migration). No upload, no cloud account required for the free tier.

⚡ MIT-licensed core

The CLI and dashboard are open source. Fork it, audit it, ship it.

🕒 Time-of-day insight

Find your most expensive hours, your slowest periods, and the days that blew up the budget.

🔔 Smart alerts

Threshold rules with desktop, email, and webhook actions. Pipe them into Slack, Discord, or n8n: your call.
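As a sketch of what a webhook action can feed downstream (the payload shape here is an assumption for illustration, not Token Meter's documented schema):

```shell
# Hypothetical payload shape -- Token Meter's real webhook schema may differ.
payload='{"rule":"daily-budget","threshold_usd":5.00,"actual_usd":7.42}'

# Forward it as-is to a Slack incoming webhook (set $SLACK_WEBHOOK_URL yourself):
# curl -sS -X POST -H 'Content-Type: application/json' -d "$payload" "$SLACK_WEBHOOK_URL"

# Or pull a field out locally, e.g. to feed a desktop notification:
rule=$(printf '%s' "$payload" | sed -n 's/.*"rule":"\([^"]*\)".*/\1/p')
echo "alert fired: $rule"
# prints: alert fired: daily-budget
```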

Connect to your AI tool

Token Meter ships an MCP server. Once registered, your AI tool can ask "what was this week's spend?", "which recent sessions can I resume?", or "why was session X expensive?" without you opening the dashboard.

🤖 Easiest: have your LLM set it up

Open Claude Code, Cursor, or Claude Desktop and paste:

Read https://raw.githubusercontent.com/whdrnr2583-cmd/tokenmeter/main/docs/mcp-server.md and set up Token Meter as my MCP server.

The doc is structured so the LLM picks the right block for your OS and client, runs the registration command, and verifies it.

Or set it up yourself (30 seconds)

Claude Code

claude mcp add token-meter -- \
  npx -y @whdrnr2583/token-meter mcp

Verify with claude mcp list.

Cursor

Edit ~/.cursor/mcp.json (Windows: %USERPROFILE%\.cursor\mcp.json):

{
  "mcpServers": {
    "token-meter": {
      "command": "npx",
      "args": ["-y", "@whdrnr2583/token-meter", "mcp"]
    }
  }
}

Restart Cursor; Settings → MCP should show green.

Claude Desktop

Edit claude_desktop_config.json with the same JSON as Cursor.

Path: ~/Library/Application Support/Claude/ (macOS) or %APPDATA%\Claude\ (Windows). Fully quit and reopen Claude Desktop.

ChatGPT / Other

ChatGPT requires an HTTP transport, so wrap the stdio server with mcpo (community recipe). Direct stdio works with Continue, Zed, and any other MCP client:

npx -y @whdrnr2583/token-meter mcp
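To sanity-check the stdio transport by hand, you can pipe a handshake in. The message below is the generic JSON-RPC 2.0 initialize request every MCP client sends first; nothing about it is Token Meter-specific:

```shell
# Generic MCP initialize request (JSON-RPC 2.0 over stdio).
init='{"jsonrpc":"2.0","id":1,"method":"initialize","params":{"protocolVersion":"2024-11-05","capabilities":{},"clientInfo":{"name":"smoke-test","version":"0.0.0"}}}'

# Uncomment to send it to the real server and watch the response on stdout:
# printf '%s\n' "$init" | npx -y @whdrnr2583/token-meter mcp

printf '%s\n' "$init"
```

A healthy server answers on stdout with its own capabilities; no reply usually means the process failed to start.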

Full guide with verification + troubleshooting: docs/mcp-server.md

Pricing

Free

$0

  • Claude Code + Codex parsing
  • MCP & tool breakdown
  • Project / model / hour breakdown
  • 7-day history
  • USD-equivalent cost
  • 1 desktop alert rule

Pro+ later

~$24/mo

  • Unlimited history
  • Local LLM proxy (Ollama, LM Studio, llama.cpp, vLLM)
  • GPU / VRAM tracking (millisecond TTFT/ITL)
  • Behavior-changing automation (MCP auto-trim, model switching, cloud↔local routing)
  • Model benchmark lab
  • Multi-machine sync (encrypted)
  • PDF reports (auto-emailed)

Substantial engineering: it ships once the proxy and per-OS GPU work are scheduled in, and not before Pro ($5) is live. Full Pro+ spec →