Metadata-Version: 2.4
Name: pydantic-ai-toolguard
Version: 0.1.0
Summary: Deny-first tool authorization for pydantic-ai agents — by AgentsID
Project-URL: Homepage, https://github.com/stevenkozeniesky02/pydantic-ai-toolguard
Project-URL: Documentation, https://agentsid.dev/docs/pydantic-ai
Project-URL: Repository, https://github.com/stevenkozeniesky02/pydantic-ai-toolguard
License: MIT
Keywords: agents,agentsid,authorization,mcp,pydantic-ai,security
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Security
Classifier: Topic :: Software Development :: Libraries
Classifier: Typing :: Typed
Requires-Python: >=3.10
Requires-Dist: pydantic-ai-slim>=1.74.0
Requires-Dist: pydantic>=2.0
Provides-Extra: dev
Requires-Dist: coverage>=7.0; extra == 'dev'
Requires-Dist: pyright>=1.1; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23; extra == 'dev'
Requires-Dist: pytest>=9.0; extra == 'dev'
Requires-Dist: ruff>=0.8; extra == 'dev'
Provides-Extra: remote
Requires-Dist: agentsid>=0.1.0; extra == 'remote'
Description-Content-Type: text/markdown

# pydantic-ai-toolguard

Deny-first tool authorization for [pydantic-ai](https://ai.pydantic.dev) agents — by [AgentsID](https://agentsid.dev).

Implements the [AgentsID Permission Specification](https://github.com/stevenkozeniesky02/permission-spec): glob-based tool patterns, parameter conditions, schedule windows, rate limiting, approval workflows, and an append-only audit log.

## Install

```bash
pip install pydantic-ai-toolguard
```

## Quick Start

```python
from pydantic_ai import Agent
from pydantic_ai_toolguard import ToolGuard, PermissionRule

guard = ToolGuard(rules=[
    PermissionRule(tool_pattern="delete_*", action="deny"),
    PermissionRule(tool_pattern="*", action="allow"),
])

agent = Agent("openai:gpt-5.2", capabilities=[guard])
```

The guard hides denied tools from the model and blocks execution if a denied tool is somehow called. Every decision is recorded in the audit log.

## How It Works

Rules are evaluated deny-first:

1. **DENY rules** checked first — first matching deny = blocked
2. **ALLOW rules** checked second — first matching allow = permitted (subject to rate limits, schedule, approval)
3. **Default DENY** — no matching rule = blocked
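
The evaluation order above can be sketched as a standalone function. This is an illustrative sketch, not the library's implementation: `Rule` stands in for `PermissionRule`, shell-style glob matching via `fnmatch` is an assumption about the spec's pattern semantics, and the rate-limit, schedule, and approval checks attached to ALLOW rules are omitted.

```python
import fnmatch
from dataclasses import dataclass


@dataclass
class Rule:
    tool_pattern: str
    action: str  # "allow" or "deny"


def evaluate(rules: list[Rule], tool: str) -> str:
    # 1. DENY rules first: the first matching deny blocks immediately.
    for r in rules:
        if r.action == "deny" and fnmatch.fnmatchcase(tool, r.tool_pattern):
            return "denied"
    # 2. ALLOW rules second: the first matching allow permits
    #    (rate-limit, schedule, and approval checks omitted here).
    for r in rules:
        if r.action == "allow" and fnmatch.fnmatchcase(tool, r.tool_pattern):
            return "allowed"
    # 3. Default DENY: no matching rule means blocked.
    return "denied"


rules = [Rule("delete_*", "deny"), Rule("*", "allow")]
print(evaluate(rules, "delete_user"))  # denied: deny wins over the "*" allow
print(evaluate(rules, "search_docs"))  # allowed
print(evaluate([], "anything"))        # denied: default deny
```

Because deny rules are scanned before allow rules, rule order within each action never lets a broad `"*"` allow override a narrower deny.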

The guard integrates with pydantic-ai through three capability hooks:

- `prepare_tools` — filters denied tools out of the model's view
- `before_tool_execute` — evaluates permissions before execution
- `after_tool_execute` — records the result
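
A toy, framework-free sketch of how the three hooks cooperate. `MiniGuard` is hypothetical and elides everything but the control flow; the real `ToolGuard` wires these hooks into pydantic-ai's capability API.

```python
import fnmatch


class MiniGuard:
    """Toy stand-in for ToolGuard showing where each hook fires."""

    def __init__(self, denied_patterns: list[str]):
        self.denied = denied_patterns
        self.log = []

    def prepare_tools(self, tools: list[str]) -> list[str]:
        # Hide denied tools from the model's view.
        return [t for t in tools
                if not any(fnmatch.fnmatchcase(t, p) for p in self.denied)]

    def before_tool_execute(self, tool: str) -> bool:
        # Second line of defense: block execution even if the model
        # somehow calls a denied tool.
        allowed = not any(fnmatch.fnmatchcase(tool, p) for p in self.denied)
        self.log.append(("decision", tool, "allow" if allowed else "deny"))
        return allowed

    def after_tool_execute(self, tool: str, result: str) -> None:
        self.log.append(("result", tool, result))


guard = MiniGuard(["delete_*"])
visible = guard.prepare_tools(["delete_user", "search_docs"])
print(visible)  # ['search_docs']
if guard.before_tool_execute("search_docs"):
    guard.after_tool_execute("search_docs", "ok")
print(guard.log)
```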

## Features

### Glob Pattern Matching

```python
PermissionRule(tool_pattern="*", action="allow")           # all tools
PermissionRule(tool_pattern="db_*", action="deny")         # prefix match
PermissionRule(tool_pattern="*_readonly", action="allow")  # suffix match
```
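
Assuming the patterns follow standard shell-glob semantics, the stdlib `fnmatch` module shows what each form matches (`fnmatchcase` keeps matching case-sensitive across platforms):

```python
import fnmatch

matches_all = fnmatch.fnmatchcase("query_users", "*")                 # "*" matches any tool
matches_prefix = fnmatch.fnmatchcase("db_insert", "db_*")             # prefix match
matches_suffix = fnmatch.fnmatchcase("users_readonly", "*_readonly")  # suffix match
no_wildcard = fnmatch.fnmatchcase("db_insert", "db")                  # no implicit wildcard
print(matches_all, matches_prefix, matches_suffix, no_wildcard)  # True True True False
```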

### Parameter Conditions

```python
PermissionRule(
    tool_pattern="query_db",
    action="deny",
    conditions={"env": "production"},  # only deny production queries
)
```
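
Assuming conditions are equality checks against the tool-call arguments (an assumption about the spec; `conditions_match` is a hypothetical helper, not part of the package API), the rule only fires when every condition key matches:

```python
def conditions_match(conditions: dict, call_args: dict) -> bool:
    # The rule applies only when every condition key equals the
    # corresponding call argument; extra arguments are ignored.
    return all(call_args.get(k) == v for k, v in conditions.items())


cond = {"env": "production"}
hit = conditions_match(cond, {"env": "production", "q": "SELECT 1"})  # deny fires
miss = conditions_match(cond, {"env": "staging", "q": "SELECT 1"})    # rule skipped
print(hit, miss)  # True False
```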

### Schedule Windows

```python
from pydantic_ai_toolguard import ScheduleConfig

PermissionRule(
    tool_pattern="deploy_*",
    action="allow",
    schedule=ScheduleConfig(
        hours_start=9, hours_end=17,
        timezone="US/Pacific",
        days=("mon", "tue", "wed", "thu", "fri"),
    ),
)
```
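
A minimal sketch of the window check such a schedule implies, using the stdlib `zoneinfo`. The `in_window` helper is hypothetical; it assumes `hours_end` is exclusive and that days use the three-letter names shown above.

```python
from datetime import datetime
from zoneinfo import ZoneInfo

DAY_NAMES = ("mon", "tue", "wed", "thu", "fri", "sat", "sun")


def in_window(now: datetime, hours_start: int, hours_end: int,
              timezone: str, days: tuple[str, ...]) -> bool:
    # Convert to the rule's timezone, then check day and hour.
    local = now.astimezone(ZoneInfo(timezone))
    return (DAY_NAMES[local.weekday()] in days
            and hours_start <= local.hour < hours_end)


weekdays = ("mon", "tue", "wed", "thu", "fri")
tue_10am = datetime(2025, 1, 7, 10, 0, tzinfo=ZoneInfo("US/Pacific"))   # Tuesday
sat_10am = datetime(2025, 1, 11, 10, 0, tzinfo=ZoneInfo("US/Pacific"))  # Saturday
ok_weekday = in_window(tue_10am, 9, 17, "US/Pacific", weekdays)   # True
off_weekend = in_window(sat_10am, 9, 17, "US/Pacific", weekdays)  # False
print(ok_weekday, off_weekend)
```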

### Rate Limiting

```python
from pydantic_ai_toolguard import RateLimitConfig

PermissionRule(
    tool_pattern="search_*",
    action="allow",
    rate_limit=RateLimitConfig(max=10, per="minute"),
)
```

### Approval Workflows

```python
async def ask_user(tool: str, rule: PermissionRule) -> bool:
    # Blocking input() is fine for a demo; use an async prompt in production.
    return input(f"Allow {tool}? (y/n) ").strip().lower() == "y"

guard = ToolGuard(
    rules=[PermissionRule(tool_pattern="transfer_*", action="allow", requires_approval=True)],
    on_approval=ask_user,
)
```

### Audit Log

```python
guard = ToolGuard(rules=[...])

# After agent runs...
for entry in guard.audit_log.query(decision="denied"):
    print(f"{entry.timestamp} — {entry.tool}: {entry.reason}")

# Export as JSON
print(guard.audit_log.export_json())
```

## Configuration

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `rules` | `Sequence[PermissionRule]` | required | Permission rules |
| `on_approval` | `async (str, PermissionRule) -> bool` | `None` | Approval callback |
| `hide_denied` | `bool` | `True` | Remove denied tools from model view |
| `log_decisions` | `bool` | `True` | Record to audit log |
| `deny_message` | `str` | `"Permission denied: {reason}"` | Message returned to model on deny |

## Links

- [AgentsID](https://agentsid.dev) — Identity and auth for AI agents
- [Permission Specification](https://github.com/stevenkozeniesky02/permission-spec)
- [AgentsID Scanner](https://github.com/stevenkozeniesky02/agentsid-scanner) — Security scanner for MCP servers
- [pydantic-ai Capabilities](https://ai.pydantic.dev/capabilities/)

## License

MIT
