Everything you need to build powerful AI agent integrations without burning your context window.
MCP servers consume 3-30K tokens of your context window before you've sent a single message. OneTool uses ~2K tokens no matter how many tool packs you load.
No context rot. No token bloat. 24x lower cost.
Write Python, not tool definitions. You see exactly what runs.
__ot brave.search(q="AI")
Wrap any existing MCP server. Configure in YAML. Call it explicitly—all the goodness of OneTool, all the power of your existing servers.
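OneTool's real config schema isn't shown here, but as an illustrative sketch (key names and structure are assumptions, not the actual format), a proxied MCP server entry might look like:

```yaml
# Hypothetical proxy entry -- key names are illustrative, not OneTool's real schema
servers:
  brave:
    command: npx
    args: ["-y", "@modelcontextprotocol/server-brave-search"]
    env:
      BRAVE_API_KEY: ${BRAVE_API_KEY}
```

The agent would then call the wrapped server explicitly, e.g. `__ot brave.search(q="AI")`, instead of loading its full tool schema into context.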
Reusable code templates with Jinja2. Define once, invoke with $snippet_name.
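The snippet format is OneTool's own; as a rough sketch (field names and layout are assumptions), a Jinja2-templated snippet might be defined like so:

```yaml
# Hypothetical snippet definition -- field names are illustrative
snippets:
  search_summary:
    template: |
      results = brave.search(q="{{ query }}", count={{ count | default(5) }})
      print(results[:3])
```

Invoked as something like `__ot $search_summary(query="AI agents")`, the template would render into plain Python before the usual validation and execution.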
Short names for tools. ws instead of brave.web_search.
p instead of pattern. Any unambiguous prefix works.
One config file. Three-tier inheritance: bundled → global → project.
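Tiered inheritance boils down to a deep merge where later tiers win. A minimal sketch of that merge (not OneTool's actual code):

```python
def merge(*tiers: dict) -> dict:
    """Merge config tiers left to right; later tiers override, nested dicts recurse."""
    out: dict = {}
    for tier in tiers:
        for key, val in tier.items():
            if isinstance(val, dict) and isinstance(out.get(key), dict):
                out[key] = merge(out[key], val)  # recurse into nested sections
            else:
                out[key] = val
    return out

bundled = {"safety": {"tier": "warn"}, "aliases": {"ws": "brave.web_search"}}
global_cfg = {"safety": {"tier": "ask"}}
project = {"aliases": {"p": "grep.pattern"}}
cfg = merge(bundled, global_cfg, project)
# project overrides global, which overrides bundled; untouched keys survive
```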
All code validated before execution. Blocks exec, eval, subprocess. Warns on risky patterns. ~1ms overhead.
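Static validation like this is typically an AST walk. A simplified illustration of the technique (OneTool's actual checker is certainly more thorough):

```python
import ast

BANNED_CALLS = {"exec", "eval"}
BANNED_MODULES = {"subprocess"}

def validate(code: str) -> list[str]:
    """Return violations found by statically walking the AST, without running the code."""
    problems = []
    for node in ast.walk(ast.parse(code)):
        if (isinstance(node, ast.Call) and isinstance(node.func, ast.Name)
                and node.func.id in BANNED_CALLS):
            problems.append(f"call to {node.func.id}()")
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names = ([a.name for a in node.names] if isinstance(node, ast.Import)
                     else [node.module])
            if any((n or "").split(".")[0] in BANNED_MODULES for n in names):
                problems.append("import of subprocess")
    return problems

validate("eval('1+1')")        # -> ["call to eval()"]
validate("import subprocess")  # -> ["import of subprocess"]
validate("print('hi')")        # -> []
```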
Four tiers: Allow, Ask, Warn, Block. Fine-grained fnmatch patterns.
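First-match `fnmatch` policies can be sketched as an ordered pattern table (the patterns and tier names below are illustrative, not a real OneTool policy):

```python
from fnmatch import fnmatch

# Illustrative policy: first matching pattern wins, "*" is the default tier.
POLICY = [
    ("file.read*", "allow"),
    ("file.delete*", "ask"),
    ("shell.*", "block"),
    ("*", "warn"),
]

def tier_for(tool: str) -> str:
    """Return the safety tier for a tool name via first-match fnmatch patterns."""
    return next(tier for pattern, tier in POLICY if fnmatch(tool, pattern))

tier_for("file.read_text")  # -> "allow"
tier_for("shell.run")       # -> "block"
tier_for("brave.search")    # -> "warn"
```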
File ops constrained to allowed directories. Symlink-safe.
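The symlink-safe part of such a check usually means resolving paths before testing containment. A minimal sketch of the idea:

```python
from pathlib import Path

def is_allowed(candidate: str, roots: list[Path]) -> bool:
    """Resolve symlinks and `..` first, then test containment against allowed roots."""
    real = Path(candidate).resolve()  # a symlink pointing outside a root is caught here
    return any(real.is_relative_to(root.resolve()) for root in roots)
```

Resolving *before* the containment test is the key design choice: checking the raw string `"/allowed/link.txt"` would pass even if `link.txt` is a symlink to `/etc/passwd`.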
Automatic protection against indirect prompt injection. External content wrapped in GUID boundaries with triggers redacted.
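The boundary-wrapping idea can be sketched as follows (trigger redaction is omitted here, and the tag format is an assumption, not OneTool's actual one):

```python
import uuid

def wrap_external(content: str) -> str:
    """Fence untrusted content in a one-time random boundary.

    Because the GUID is unguessable, injected text inside `content` cannot
    forge a matching closing tag to escape the data region.
    """
    guid = uuid.uuid4().hex
    return f"<external-{guid}>\n{content}\n</external-{guid}>"
```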
One file, one pack. No registration, no configuration. Drop a Python file, restart, call your functions.
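Under the "plain functions, no registration" model, a dropped-in pack might look like the file below. The module-name-becomes-pack-name convention and the endpoint are assumptions for illustration:

```python
# weather.py -- dropped into the tools directory; assuming the module name
# becomes the pack name, the agent could call weather.forecast(...)
import json
import urllib.request

def forecast(city: str) -> dict:
    """Illustrative tool: fetch a forecast from a hypothetical endpoint."""
    url = f"https://api.example.com/forecast?city={city}"
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def to_fahrenheit(celsius: float) -> float:
    """Plain functions need no decorator or schema -- the signature is the contract."""
    return celsius * 9 / 5 + 32
```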
Tools with deps run in isolated subprocesses via PEP 723.
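PEP 723 defines a comment-based inline metadata block that a runner can read to provision dependencies before executing the file. A sketch of what such a tool file looks like (the `httpx` dependency and `fetch` helper are illustrative):

```python
# /// script
# requires-python = ">=3.9"
# dependencies = ["httpx"]
# ///
# The `# /// script` header is standard PEP 723 inline metadata: a runner
# parses it and installs `httpx` into an isolated environment first.

def fetch(url: str) -> str:
    import httpx  # resolved from the isolated subprocess environment at call time
    return httpx.get(url).text
```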
Build tools in separate repos. Share via tools_dir globs.
Real agent + MCP server testing. Define tasks in YAML, get objective metrics: tokens, costs, accuracy, timing.
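The task schema is OneTool's own; as an illustrative sketch (field names are assumptions, not the real format), a bench task might look like:

```yaml
# Hypothetical bench task -- field names are illustrative
tasks:
  - name: web-search-basic
    prompt: "Find the release year of Python 3.12"
    expect:
      contains: "2023"
    budget:
      max_tokens: 5000
```

Each run would then report tokens consumed, dollar cost, whether the expectation matched, and wall-clock timing.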
Smoke, unit, and integration tiers. Fast CI feedback.
Track calls, success rates, costs. HTML reports.
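Call tracking of this kind reduces to a small accumulator. A minimal sketch, not OneTool's actual reporter:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class Metrics:
    """Minimal per-tool call tracker feeding a report."""
    calls: Counter = field(default_factory=Counter)
    failures: Counter = field(default_factory=Counter)
    cost_usd: float = 0.0

    def record(self, tool: str, ok: bool, cost: float) -> None:
        self.calls[tool] += 1
        if not ok:
            self.failures[tool] += 1
        self.cost_usd += cost

    def success_rate(self, tool: str) -> float:
        n = self.calls[tool]
        return (n - self.failures[tool]) / n if n else 0.0
```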