Initializing Python runtime...
Tip: Aegis evaluates policies in under 1ms
AI agent governance in your browser. No install needed.
One function activates everything: guardrails, policy enforcement, auto-patching, audit logging, and cost tracking. Drop an aegis.yaml in your project root and call aegis.auto_instrument().
guardrails:
pii:
action: mask # mask | block | warn | log
categories:
- email
- credit_card
- ssn
- korean_rrn
- api_key
injection:
action: block # block | warn | log
sensitivity: medium # low | medium | high
integrations:
patch_openai: true # auto-patch OpenAI client
patch_anthropic: true # auto-patch Anthropic client
audit:
backend: sqlite # sqlite | redis | postgres
# Option A: Two lines of Python
import aegis
aegis.auto_instrument()
# Option B: Zero code changes — just an env var
$ AEGIS_INSTRUMENT=1 python my_agent.py
# Fine-grained control
aegis.auto_instrument(
frameworks=["langchain", "openai_agents"], # specific frameworks only
on_block="warn", # "raise" (default) | "warn" | "log"
)
| Guardrail | Default | Catches |
|---|---|---|
| Prompt injection | Block | 85+ patterns, multi-language (EN/KO/ZH/JA) |
| PII detection | Warn | 12 categories (email, credit card, SSN, API keys…) |
| Prompt leak | Warn | System prompt extraction attempts |
| Toxicity | Warn | Harmful/abusive content (opt-in to block) |
Your code Aegis layer (invisible)
───────── ───────────────────────
chain.invoke("Hello") ──▶ [input guardrails] ──▶ LangChain ──▶ [output guardrails] ──▶ response
Runner.run(agent, "query") ──▶ [input guardrails] ──▶ OpenAI SDK ──▶ [output guardrails] ──▶ response
crew.kickoff() ──▶ [task guardrails] ──▶ CrewAI ──▶ [tool guardrails] ──▶ response
Define rules in YAML: which actions are auto-approved, need human review, or are blocked.
Send agent actions (navigate, read, write, delete) through the policy engine.
Instantly see risk level, approval decision, matched rule, and full audit trail.
# 50+ lines of DIY governance... per action type
if action.type == "delete":
if action.risk > THRESHOLD:
logger.warning(f"High-risk: {action}")
if not await ask_human_approval(action):
raise PermissionError("Denied")
# No audit trail
# No policy hot-reload
# Breaks when you add a new action type
result = await executor.run(action)
Scan MCP tool definitions for poisoning patterns. Ported from Aegis ToolDescriptionScanner with 10 regex detection patterns and Unicode normalization.
Simulate LLM cost tracking with budget limits and threshold transitions. Ported from Aegis CostTracker with real model pricing data.
Interactive hash-chain audit log using Web Crypto SHA-256. Ported from Aegis CryptoAuditChain for tamper-evident logging.
Map Aegis features to regulatory requirements across 5 frameworks. Ported from Aegis ComplianceMapper.
Real-time PII detection and masking. Ported from Aegis PIIGuardrail with Luhn validation for credit cards, Korean RRN/phone patterns, and API key detection.
Real-time prompt injection detection across 8 categories. Ported from Aegis InjectionGuardrail with multi-language support (Korean, Chinese, Japanese) and configurable sensitivity.
See the streaming guardrail problem in real-time. Left: LLM streams freely and leaks PII. Right: Aegis catches it. Same response, different outcome. Based on a real LangChain issue.
aegis.auto_instrument() adds full security in 1 function call — no behavior changes, just safety.
agent.run(action) # 💥 anything goes
aegis.auto_instrument() # 🛡️ everything governed
aegis.auto_instrument() once at startupauto / approve / blockDefine rules for each action type: auto-approve safe reads, require human review for writes, block dangerous operations.
Your AI agent (LangChain, CrewAI, OpenAI, etc.) sends each action through Aegis before executing it.
Aegis runs 4 guardrail scans + risk eval in 2.65ms (0.5% of a typical LLM call). Auto-approve, review, or block — every decision audit-logged.
Every action is evaluated, logged, and auditable — zero blind spots.
$ pip install agent-aegisversion: "1"
rules:
- name: read_auto
approval: auto
- name: write_review
approval: approve
- name: delete_block
approval: blockimport aegis
aegis.auto_instrument() # auto-patches all frameworks, activates everything
# OpenAI/Anthropic calls are now governed.
# PII masked, injections blocked, all audited.$ docker run -p 8000:8000 \
-v ./policy.yaml:/app/policy.yaml \
ghcr.io/acacian/aegis:latest
# REST API at http://localhost:8000See it in action — no install required
Click any scenario to load its policy and test actions
The difference between hoping your AI agent behaves and knowing it does
| Without Governance | With Aegis | |
|---|---|---|
| Policy changes | Redeploy code | Edit YAML, hot-reload |
| Risk evaluation | Manual if/else chains | 2.65ms with guardrails, declarative rules |
| Audit trail | Build your own logging | Built-in, compliance-ready |
| Human approval | Custom workflow code | One-line approval handler |
| Framework support | Build per framework | 7 adapters, one policy |
| Setup time | Days to weeks | 5 minutes |
Add governance to any Python AI agent in 5 minutes. One pip install, one YAML file.