100% on 90 real coding tasks. Plain Claude scores 78.3%. With Token Savior, the same model on the same tasks lands every single one — 25 wins, 65 ties, 0 losses. Active tokens drop 77%. Wall time drops 76%. No other MCP code-navigation server has cleared this benchmark.
Two agents, one codebase, ninety prompts pulled from real engineering work — find callers, audit cycles, estimate blast radius, summarize modules. We counted tokens, wall time and correctness.
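The win/tie/loss tally above is simple enough to sketch. Assuming each task yields one correctness verdict per agent (a hypothetical data shape, not the benchmark's actual schema), classifying outcomes looks like:

```python
# Sketch of the outcome tally, assuming hypothetical per-task records.
# A "win" means Token Savior answered correctly where plain Claude did not;
# a "tie" means both agents reached the same verdict.

def tally(results):
    """results: list of (plain_correct, savior_correct) booleans, one per task."""
    wins = sum(1 for plain, savior in results if savior and not plain)
    losses = sum(1 for plain, savior in results if plain and not savior)
    ties = len(results) - wins - losses
    return wins, ties, losses

# Toy data shaped like the headline numbers: 25 tasks only Token Savior
# solved, 65 tasks both solved, 0 tasks only plain Claude solved.
toy = [(False, True)] * 25 + [(True, True)] * 65
print(tally(toy))  # → (25, 65, 0)
```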
Every task from the benchmark. Pick any base model on the left and any Token-Savior-wired model on the right; the numbers recompute live. Token Savior wins or ties on every single one. Click a row to jump to the replay below.
| # | task | A | B | outcome |
|---|---|---|---|---|
What you see below is the tsbench fixture parsed into its call graph: 206 symbols (functions, classes, methods, constants), 412 call edges connecting them, clustered by module. The plain agent walks this one grep at a time; Token Savior queries it directly. Drag to orbit, hover a point for details.
Every tile is a real task from the benchmark, colored by outcome. Click one and we'll replay both agents' actual traces side by side — real tools, real timings, real answers.
Token Savior speaks the standard Model Context Protocol. Drop it into any client, keep your tools, ship a sharper agent.
Token Savior is an open MCP server. Point it at your repo, wire it into Claude Code, Cursor, Codex, Antigravity, Cline, Continue, Windsurf, Aider, Gemini CLI — any MCP-compatible client. Ship a better agent this afternoon.
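Wiring an MCP server into a client is typically a few lines of config. A hypothetical entry in a Claude-Desktop-style `mcpServers` block — the `token-savior` command name and its arguments are assumptions for illustration, not the project's documented invocation:

```json
{
  "mcpServers": {
    "token-savior": {
      "command": "token-savior",
      "args": ["--repo", "/path/to/your/repo"]
    }
  }
}
```

Most MCP-compatible clients accept a config of this shape; check each client's docs for where the file lives.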