Metadata-Version: 2.4
Name: janus-guard
Version: 0.0.4
Summary: System-level security for LLM agents via fine-grained policy enforcement on tool calls.
Project-URL: Documentation, https://agentic-ai-risk-mitigation.github.io/Janus/
Project-URL: Repository, https://github.com/Agentic-AI-Risk-Mitigation/Janus
Project-URL: Issues, https://github.com/Agentic-AI-Risk-Mitigation/Janus/issues
License: MIT
License-File: LICENSE
Keywords: agents,llm,policy,security,tool-calling
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Security
Classifier: Topic :: Software Development :: Libraries
Requires-Python: >=3.11
Requires-Dist: authzed>=1.24.3
Requires-Dist: fastapi>=0.135.1
Requires-Dist: jinja2>=3.1
Requires-Dist: jsonschema>=4.0
Requires-Dist: openai<2.0,>=1.0
Requires-Dist: pydantic>=2.0
Requires-Dist: python-dotenv>=1.0
Requires-Dist: uvicorn[standard]>=0.41.0
Provides-Extra: adk
Requires-Dist: google-genai>=1.0; extra == 'adk'
Provides-Extra: all
Requires-Dist: anthropic>=0.25; extra == 'all'
Requires-Dist: boto3>=1.26; extra == 'all'
Requires-Dist: google-genai>=1.0; extra == 'all'
Requires-Dist: langchain-core>=0.3; extra == 'all'
Requires-Dist: langchain-openai>=0.2; extra == 'all'
Requires-Dist: langchain>=0.3; extra == 'all'
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.25; extra == 'anthropic'
Provides-Extra: bedrock
Requires-Dist: boto3>=1.26; extra == 'bedrock'
Provides-Extra: dev
Requires-Dist: mypy>=1.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.21; extra == 'dev'
Requires-Dist: pytest>=7.0; extra == 'dev'
Requires-Dist: ruff>=0.4; extra == 'dev'
Provides-Extra: docs
Requires-Dist: mkdocs-material>=9.0; extra == 'docs'
Requires-Dist: mkdocs-mermaid2-plugin>=0.7.0; extra == 'docs'
Requires-Dist: mkdocs>=1.5; extra == 'docs'
Requires-Dist: pymdown-extensions>=10.0; extra == 'docs'
Provides-Extra: google
Requires-Dist: google-genai>=1.0; extra == 'google'
Provides-Extra: langchain
Requires-Dist: langchain-core>=0.3; extra == 'langchain'
Requires-Dist: langchain-openai>=0.2; extra == 'langchain'
Requires-Dist: langchain>=0.3; extra == 'langchain'
Description-Content-Type: text/markdown

# Janus

**System-level security for LLM agents via fine-grained policy enforcement on tool calls.**

Janus intercepts every tool call an LLM agent makes and validates it against a security policy before execution — following the principle of least privilege. Policies are defined in JSON (or auto-generated by an LLM) and validated at runtime using JSON Schema restrictions.

> **Status:** Alpha (v0.0.4). APIs may change between releases.

**[Documentation](https://agentic-ai-risk-mitigation.github.io/Janus/)** | **[GitHub](https://github.com/Agentic-AI-Risk-Mitigation/Janus)**

---

## Table of Contents

- [Janus](#janus)
  - [Table of Contents](#table-of-contents)
  - [Features](#features)
  - [Installation](#installation)
  - [Quick Start](#quick-start)
    - [`JanusAgent` Parameters](#janusagent-parameters)
    - [`JanusAgent` Methods](#janusagent-methods)
  - [Policy Format](#policy-format)
    - [Full Format](#full-format)
    - [Shorthand Format (conditions only)](#shorthand-format-conditions-only)
    - [Policy Rule Fields](#policy-rule-fields)
    - [Condition Schemas](#condition-schemas)
    - [Evaluation Logic](#evaluation-logic)
  - [LLM Providers](#llm-providers)
  - [Built-in Tools](#built-in-tools)
    - [File Tools](#file-tools)
    - [Command Tools](#command-tools)
  - [Custom Tools](#custom-tools)
    - [`ToolParam` Fields](#toolparam-fields)
  - [LLM-Generated Policies](#llm-generated-policies)
    - [Standalone Policy Generation](#standalone-policy-generation)
    - [Policy Refinement](#policy-refinement)
  - [Framework Adapters](#framework-adapters)
    - [LangChain](#langchain)
      - [Depth 1 — Convert `ToolDef` list to secured `StructuredTool` list](#depth-1--convert-tooldef-list-to-secured-structuredtool-list)
      - [Depth 2 — Wrap existing LangChain tools](#depth-2--wrap-existing-langchain-tools)
      - [Depth 3 — `JanusLangChainAgent` (turnkey)](#depth-3--januslangchainagent-turnkey)
    - [Google ADK (Gemini)](#google-adk-gemini)
      - [Depth 1 — Convert `ToolDef` list to ADK-native types](#depth-1--convert-tooldef-list-to-adk-native-types)
      - [Depth 2 — `JanusADKAgent` (turnkey)](#depth-2--janusadkagent-turnkey)
  - [Standalone Policy Enforcement](#standalone-policy-enforcement)
  - [Runtime Policy Management](#runtime-policy-management)
  - [Error Handling](#error-handling)
  - [Project Structure](#project-structure)
  - [License](#license)

---

## Features

- **Fine-grained policy enforcement** — allow or deny tool calls based on argument-level JSON Schema conditions
- **Principle of least privilege** — policies restrict agents to only what is needed for a specific task
- **Multiple policy sources** — load from a JSON file, a Python dict, or auto-generate with an LLM
- **LLM-generated policies** — automatically infer minimum-privilege policies from a user's query
- **Policy refinement** — incrementally tighten policies as the agent discovers information during a task
- **Graph-based capability surfacing** — SpiceDB-backed ReBAC and runtime taint tracking (IPI defence) via the integrated PDE engine in `janus/policy/pde/`
- **Built-in tools** — ready-to-use file system and command execution tools with workspace sandboxing
- **Custom tools** — define your own tools with `ToolDef` / `ToolParam`; Janus guards them automatically
- **9 LLM providers** — OpenAI, Anthropic, Google Gemini, Azure OpenAI, AWS Bedrock, Ollama, vLLM, Together AI, OpenRouter
- **Framework adapters** — plug Janus enforcement into LangChain and Google ADK agents
- **Standalone enforcer** — use `PolicyEnforcer` independently in any agentic framework
- **Three fallback actions** — raise `PolicyViolation`, call `sys.exit`, or prompt the user interactively
- **Workspace isolation** — file tools are scoped to a directory; path-traversal attempts are rejected

---

## Installation

Requires Python ≥ 3.11. [uv](https://docs.astral.sh/uv/) is the recommended package manager.

**Core (OpenAI / OpenAI-compatible providers):**

```bash
uv add janus-guard
```

**With optional provider extras:**

```bash
uv add "janus-guard[anthropic]"   # Anthropic Claude
uv add "janus-guard[google]"      # Google Gemini
uv add "janus-guard[bedrock]"     # AWS Bedrock
uv add "janus-guard[langchain]"   # LangChain adapter
uv add "janus-guard[adk]"         # Google ADK adapter
uv add "janus-guard[all]"         # Everything
```

**For development:**

```bash
uv add "janus-guard[dev]"         # pytest, ruff, mypy
```

**Install from source:**

```bash
git clone https://github.com/Agentic-AI-Risk-Mitigation/Janus.git
cd Janus
uv pip install -e .
```

---

## Quick Start

```python
from janus import JanusAgent

agent = JanusAgent(
    model="openai/gpt-4o",
    api_key="sk-...",                    # or set OPENAI_API_KEY env var
    use_builtin_tools=True,
    policy="policies.json",
    system_prompt="You are a helpful coding assistant.",
)

response = agent.run("List the Python files in the project.")
print(response)
```

### `JanusAgent` Parameters

| Parameter | Type | Default | Description |
|---|---|---|---|
| `model` | `str` | — | Model string: `"<provider>/<model-name>"` (e.g. `"openai/gpt-4o"`) |
| `system_prompt` | `str` | `"You are a helpful assistant."` | System message for the LLM |
| `tools` | `list[ToolDef] \| None` | `None` | Custom tool definitions to register |
| `use_builtin_tools` | `bool` | `True` | Register built-in file and command tools |
| `policy` | `str \| Path \| dict \| None` | `None` | Policy source (file path, dict, `"generate"`, or `None`) |
| `policy_model` | `str \| None` | `"gpt-4o-2024-08-06"` | Model used for LLM-based policy generation |
| `policy_engine`| `str` | `"janus"` | Enforcer engine to use (`"janus"` or `"pde"`) |
| `agent_role`   | `str` | `"coding_agent"` | The role assessed during `pde` taint tracking |
| `api_key` | `str \| None` | `None` | API key (falls back to provider's env var) |
| `workspace` | `str \| Path \| None` | `cwd` | Root directory for file-system tools |
| `max_tool_iterations` | `int` | `10` | Max tool-call cycles per `run()` call |
| `temperature` | `float` | `0.1` | Sampling temperature |
| `log_level` | `str \| None` | `"INFO"` | Logging level (`"DEBUG"`, `"INFO"`, `"WARNING"`) |

### `JanusAgent` Methods

| Method | Description |
|---|---|
| `run(user_input)` | Run the agent and return the final text response |
| `clear_history()` | Reset conversation history (keeps policy and tools) |
| `set_policy(policy)` | Load or replace the security policy at runtime |
| `get_policy()` | Return the current policy dict |
| `save_policy(path)` | Persist the current policy to a JSON file |
| `allow_tools(tools)` | Unconditionally allow the listed tools (highest priority) |
| `block_tools(tools)` | Unconditionally block the listed tools (highest priority) |
| `add_tool(tool)` | Register an additional tool at runtime |
| `remove_tool(name)` | Unregister a tool by name |
| `list_tools()` | Return names of all registered tools |
| `update_taint(risk)` | Update session taint risk monotonically (PDE engine only) |

---

## Policy Format

Policies are JSON documents that map tool names to lists of rules. Each rule specifies whether to allow or deny a tool call and can include argument-level restrictions.

### Full Format

```json
{
    "read_file": [
        {
            "priority": 1,
            "effect": 0,
            "conditions": {
                "file_path": {
                    "type": "string",
                    "pattern": "^reports/.*\\.csv$"
                }
            },
            "fallback": 0
        }
    ],
    "run_command": [
        {
            "priority": 1,
            "effect": 1,
            "conditions": {},
            "fallback": 0
        }
    ]
}
```

### Shorthand Format (conditions only)

When you only need to restrict argument values, the shorthand skips `priority`, `effect`, and `fallback` (defaulting to allow, priority 1, raise on violation):

```json
{
    "read_file": {
        "file_path": { "type": "string", "pattern": "^data/.*" }
    }
}
```
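
Applying the documented defaults, the shorthand above is equivalent to this full-form policy:

```json
{
    "read_file": [
        {
            "priority": 1,
            "effect": 0,
            "conditions": {
                "file_path": { "type": "string", "pattern": "^data/.*" }
            },
            "fallback": 0
        }
    ]
}
```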

### Policy Rule Fields

| Field | Type | Description |
|---|---|---|
| `priority` | `int` | Evaluation order — lower value runs first |
| `effect` | `int` | `0` = allow, `1` = deny |
| `conditions` | `dict` | JSON Schema restrictions keyed by argument name |
| `fallback` | `int` | `0` = raise `PolicyViolation`, `1` = `sys.exit(1)`, `2` = ask user |

### Condition Schemas

Conditions follow JSON Schema syntax. Common patterns:

```json
{ "type": "string", "pattern": "^/safe/path/.*" }
{ "type": "string", "enum": ["ls", "pwd", "cat"] }
{ "type": "integer", "minimum": 0, "maximum": 100 }
{ "type": "array", "items": { "type": "string" } }
```

### Evaluation Logic

1. Rules for a tool are evaluated in ascending `priority` order.
2. **Allow rule** (`effect=0`): if all conditions pass → tool is allowed immediately.
3. **Deny rule** (`effect=1`): if all conditions match → tool is blocked using the configured `fallback`.
4. If no rule matches → the tool is **blocked by default**.
5. Tools not listed in the policy are blocked when a policy is loaded.
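
The steps above can be sketched as follows. This is an illustrative model of the documented evaluation order, not the actual `PolicyEnforcer` implementation; it uses the `jsonschema` library (a core dependency) for condition checks:

```python
from jsonschema import ValidationError, validate

ALLOW = 0  # effect values as documented above (0 = allow, 1 = deny)

def evaluate(rules: list[dict], args: dict) -> bool:
    """Sketch of the documented evaluation order; True means allowed."""
    for rule in sorted(rules, key=lambda r: r["priority"]):
        # A rule matches when every condition schema accepts its argument.
        try:
            for arg_name, schema in rule["conditions"].items():
                validate(args.get(arg_name), schema)
        except ValidationError:
            continue  # this rule does not match; try the next one
        return rule["effect"] == ALLOW  # allow immediately, or block
    return False  # no rule matched: blocked by default

rules = [{
    "priority": 1, "effect": 0, "fallback": 0,
    "conditions": {"file_path": {"type": "string", "pattern": "^data/.*"}},
}]
print(evaluate(rules, {"file_path": "data/report.csv"}))  # True
print(evaluate(rules, {"file_path": "/etc/passwd"}))      # False
```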

---

## LLM Providers

Use the `"<provider>/<model-name>"` format for the `model` parameter:

| Provider | Model string examples | Env var |
|---|---|---|
| **OpenAI** | `openai/gpt-4o`, `openai/gpt-4o-mini` | `OPENAI_API_KEY` |
| **Anthropic** | `anthropic/claude-3-5-sonnet-20241022` | `ANTHROPIC_API_KEY` |
| **Google Gemini** | `google/gemini-2.0-flash`, `gemini/gemini-1.5-pro` | `GOOGLE_API_KEY` / `GEMINI_API_KEY` |
| **Azure OpenAI** | `azure/<deployment-name>` | `AZURE_OPENAI_API_KEY` |
| **AWS Bedrock** | `bedrock/anthropic.claude-3-5-sonnet-20241022-v2:0` | AWS credentials |
| **Ollama** (local) | `ollama/llama3.2`, `ollama/mistral` | — |
| **vLLM** (local) | `vllm/meta-llama/Llama-3.3-70B-Instruct` | `VLLM_BASE_URL` |
| **Together AI** | `together/meta-llama/Llama-3-70b-chat-hf` | `TOGETHER_API_KEY` |
| **OpenRouter** | `openrouter/anthropic/claude-3.5-sonnet` | `OPENROUTER_API_KEY` |

**Example — Anthropic:**

```python
agent = JanusAgent(
    model="anthropic/claude-3-5-sonnet-20241022",
    policy="policies.json",
)
```

**Example — Ollama (local):**

```python
agent = JanusAgent(
    model="ollama/llama3.2",
    policy="policies.json",
)
```

**Example — Azure OpenAI:**

```python
agent = JanusAgent(
    model="azure/my-deployment",
    api_key="...",
    base_url="https://my-resource.openai.azure.com/",
    api_version="2024-02-01",
    policy="policies.json",
)
```

---

## Built-in Tools

When `use_builtin_tools=True` (the default), Janus registers these tools automatically:

### File Tools

| Tool | Description | Parameters |
|---|---|---|
| `read_file` | Read the full contents of a file | `file_path: str` |
| `write_file` | Create or overwrite a file | `file_path: str`, `content: str` |
| `edit_file` | Replace a unique string in a file | `file_path: str`, `old_string: str`, `new_string: str` |
| `list_directory` | List directory contents | `path: str` (default: workspace root) |

All file tools are scoped to the `workspace` directory — attempts to access paths outside the workspace are rejected.

### Command Tools

| Tool | Description | Parameters |
|---|---|---|
| `run_command` | Execute a shell command in the workspace | `command: str`, `timeout: int` (default: 60) |
| `fetch_url` | Fetch content from a URL via HTTP GET | `url: str` |

**Setting a workspace:**

```python
agent = JanusAgent(
    model="openai/gpt-4o",
    workspace="./my_project",   # file tools are sandboxed here
    policy="policies.json",
)
```

---

## Custom Tools

Define your own tools with `ToolDef` and `ToolParam`:

```python
from janus import JanusAgent, ToolDef, ToolParam

def search_database(query: str, limit: int = 10) -> str:
    # your implementation
    return f"Results for '{query}' (limit={limit})"

agent = JanusAgent(
    model="openai/gpt-4o",
    use_builtin_tools=False,    # disable built-ins if not needed
    tools=[
        ToolDef(
            name="search_database",
            description="Search the internal database for records matching a query.",
            params=[
                ToolParam("query", "string", "Search query string"),
                ToolParam("limit", "integer", "Maximum number of results", required=False, default=10),
            ],
            handler=search_database,
        )
    ],
    policy={
        "search_database": [
            {
                "priority": 1,
                "effect": 0,
                "conditions": {
                    "limit": {"type": "integer", "maximum": 50}
                },
                "fallback": 0,
            }
        ]
    },
)

response = agent.run("Find records about renewable energy.")
```

### `ToolParam` Fields

| Field | Type | Description |
|---|---|---|
| `name` | `str` | Parameter name (must match the handler's kwarg name) |
| `type` | `str` | JSON Schema type: `"string"`, `"integer"`, `"number"`, `"boolean"`, `"array"`, `"object"` |
| `description` | `str` | Description shown to the LLM |
| `required` | `bool` | Whether the parameter must be supplied (default: `True`) |
| `default` | `Any` | Default value when `required=False` |
| `enum` | `list \| None` | Restrict to a fixed set of allowed values |


---

## LLM-Generated Policies

Janus can automatically generate minimum-privilege policies by asking an LLM to infer what tools and restrictions are needed for a given user query:

```python
agent = JanusAgent(
    model="openai/gpt-4o",
    policy="generate",                        # generate on first run()
    policy_model="openai/gpt-4o-2024-08-06", # model used for generation
)

# Policy is generated from this query on the first call
response = agent.run("Read the file sales_2024.csv and summarize the totals.")

# Inspect and save the generated policy
print(agent.get_policy())
agent.save_policy("generated_policy.json")
```

### Standalone Policy Generation

Use `generate_policy` directly without a full agent:

```python
from janus import generate_policy, save_policy

tools = [
    {"name": "read_file", "description": "Read a file", "args": {...}},
    {"name": "run_command", "description": "Run a shell command", "args": {...}},
]

policy = generate_policy(
    query="Read the quarterly report and list the top 5 expenses.",
    tools=tools,
    model="gpt-4o-2024-08-06",
    manual_confirm=True,   # ask before applying
)

save_policy(policy, "policies.json")
```

### Policy Refinement

After an information-gathering tool call, tighten the policy using discovered values:

```python
from janus import refine_policy

updated = refine_policy(
    query="Send the invoice to the customer.",
    tools=tools,
    tool_call_params={"file_path": "invoices/inv_001.txt"},
    tool_call_result="Customer email: alice@example.com",
    current_policy=current_policy,
    model="gpt-4o-2024-08-06",
    manual_confirm=True,
)
```

---

## Framework Adapters

### LangChain

Three integration depths are available.

**Install:**

```bash
uv add "janus-guard[langchain]"
```

#### Depth 1 — Convert `ToolDef` list to secured `StructuredTool` list

Use when you build your own LangChain agent but want Janus-guarded tools:

```python
from janus.adapters.langchain import secure_langchain_tools
from langchain.agents import AgentExecutor, create_tool_calling_agent
from langchain_openai import ChatOpenAI

lc_tools = secure_langchain_tools(my_janus_tools, "policies.json")

llm = ChatOpenAI(model="gpt-4o")
agent = create_tool_calling_agent(llm, lc_tools, prompt)
executor = AgentExecutor(agent=agent, tools=lc_tools)
```

#### Depth 2 — Wrap existing LangChain tools

Use when you have an existing LangChain codebase and want to retrofit Janus enforcement:

```python
from janus.adapters.langchain import wrap_langchain_tools

# existing_tools is your existing list of LangChain BaseTool objects
existing_tools = wrap_langchain_tools(existing_tools, "policies.json")
# Pass to your existing AgentExecutor as usual
```

#### Depth 3 — `JanusLangChainAgent` (turnkey)

```python
from janus.adapters.langchain import JanusLangChainAgent

agent = JanusLangChainAgent(
    model="openai/gpt-4o",
    tools=my_janus_tools,
    policy="policies.json",
    system_prompt="You are a helpful assistant.",
    max_iterations=10,
)

response = agent.run("Summarize the quarterly results.")
agent.clear_history()
```

---

### Google ADK (Gemini)

Two integration depths are available.

**Install:**

```bash
uv add "janus-guard[adk]"
```

#### Depth 1 — Convert `ToolDef` list to ADK-native types

Use when you manage your own Gemini chat loop:

```python
from janus.adapters.adk import secure_adk_tools
from google import genai
from google.genai import types

declarations, handlers = secure_adk_tools(my_janus_tools, "policies.json")

client = genai.Client(api_key="...")
config = types.GenerateContentConfig(
    tools=[types.Tool(function_declarations=declarations)],
    system_instruction="You are helpful.",
    automatic_function_calling=types.AutomaticFunctionCallingConfig(disable=True),
)
chat = client.chats.create(model="gemini-2.0-flash", config=config)

response = chat.send_message("List the files in the project.")
while response.function_calls:
    fc = response.function_calls[0]
    result = handlers[fc.name](**dict(fc.args))
    response = chat.send_message(
        types.Part.from_function_response(fc.name, {"result": result})
    )
print(response.text)
```

#### Depth 2 — `JanusADKAgent` (turnkey)

```python
from janus.adapters.adk import JanusADKAgent

agent = JanusADKAgent(
    model="gemini-2.0-flash",
    tools=my_janus_tools,
    policy="policies.json",
    system_prompt="You are a helpful assistant.",
    max_tool_iterations=10,
)

response = agent.run("What Python files are in the workspace?")
agent.clear_history()
```

---

## Standalone Policy Enforcement

`PolicyEnforcer` can be used independently in any agentic framework — just call `enforce()` before executing any tool:

```python
from janus.policy import PolicyEnforcer
from janus import PolicyViolation

enforcer = PolicyEnforcer()
enforcer.load("policies.json")

# Before executing a tool:
try:
    enforcer.enforce("read_file", {"file_path": "data/report.csv"})
    result = read_file("data/report.csv")   # proceeds normally
except PolicyViolation as exc:
    print(f"Blocked: {exc.reason}")

# Programmatic policy updates:
enforcer.allow_tools(["list_directory"])    # unconditional allow
enforcer.block_tools(["run_command"])       # unconditional deny
enforcer.update({"write_file": [(1, 0, {}, 0)]})  # merge additional rules
```

---

## Runtime Policy Management

Policies can be inspected and modified after the agent is created:

```python
# Load a new policy
agent.set_policy("new_policies.json")
agent.set_policy({"read_file": [{"priority": 1, "effect": 0, "conditions": {}, "fallback": 0}]})

# Inspect the current policy
print(agent.get_policy())

# Save to disk
agent.save_policy("saved_policy.json")

# Allow / block specific tools at runtime
agent.allow_tools(["list_directory", "read_file"])
agent.block_tools(["run_command"])

# Manage tools
agent.add_tool(my_new_tool)
agent.remove_tool("old_tool")
print(agent.list_tools())
```

---

## Error Handling

All Janus exceptions inherit from `JanusError`:

```python
from janus import (
    JanusError,
    PolicyViolation,         # tool call blocked by policy
    ArgumentValidationError, # argument failed JSON Schema check
    PolicyLoadError,         # policy file not found or invalid JSON
    PolicyGenerationError,   # LLM-based generation failed
    ToolNotFoundError,       # tool name not registered
    ProviderError,           # LLM provider error
)

try:
    response = agent.run("Delete all log files.")
except PolicyViolation as exc:
    print(f"Tool '{exc.tool_name}' was blocked.")
    print(f"Reason: {exc.reason}")
    print(f"Arguments: {exc.arguments}")
except JanusError as exc:
    print(f"Janus error: {exc}")
```

---

## Project Structure

```
janus/
├── agent.py                  # JanusAgent — main entry point
├── exceptions.py             # Custom exception classes
├── logger.py                 # Structured logging utilities
├── __init__.py               # Public API re-exports
│
├── llm/
│   ├── base.py               # BaseLLMProvider interface
│   ├── runner.py             # LLMRunner — conversation loop
│   ├── response_types.py     # Provider response types
│   └── providers/
│       ├── openai_provider.py
│       ├── anthropic_provider.py
│       ├── google_provider.py
│       ├── azure_provider.py
│       ├── bedrock_provider.py
│       ├── ollama_provider.py
│       ├── vllm_provider.py
│       ├── together_provider.py
│       └── openrouter_provider.py
│
├── policy/
│   ├── enforcer.py           # PolicyEnforcer — JSON Schema rule engine
│   ├── pde_enforcer.py       # PDEEnforcer — adapter for SpiceDB/taint engine
│   ├── pde/                  # SpiceDB-backed ReBAC + taint (PDE)
│   │   ├── config.py         # SCHEMA, TOOL_TAINT_LIMIT, RISK_TO_TAINT
│   │   ├── interceptor.py    # GraphInterceptor — taint gate + SpiceDB ACL
│   │   ├── discovery.py      # GraphDiscoveryEngine
│   │   └── bootstrap.py      # make_client, bootstrap, Session, allow_tool
│   ├── generator.py          # LLM-based policy generation & refinement
│   ├── loader.py             # JSON parsing and policy persistence
│   └── validator.py          # JSON Schema argument validation
│
├── tools/
│   ├── base.py               # ToolDef, ToolParam dataclasses
│   ├── registry.py           # ToolRegistry — manages registered tools
│   └── builtin/
│       ├── file_tools.py     # read_file, write_file, edit_file, list_directory
│       └── command_tools.py  # run_command, fetch_url
│
└── adapters/
    ├── _base.py              # Shared adapter utilities
    ├── langchain.py          # LangChain integration
    └── adk.py                # Google ADK (Gemini) integration

examples/                     # Demo scenario framework + FastAPI web app + docker-compose.yml for SpiceDB
```

---

## Running the Demo Web App

The demo web UI lives in `examples/app.py`. A helper script at the repo root starts the FastAPI app with a single command, so you don't need to remember the uvicorn import path.

```bash
# Install demo/example dependencies from the repo checkout
uv sync --extra langchain --extra dev
# or: uv sync --extra all --extra dev

# Start the web UI on http://127.0.0.1:8000
./scripts/run_demo_webapp.sh

# Start the web UI and bring up SpiceDB for Demo 5
./scripts/run_demo_webapp.sh --with-spicedb
```

The script also supports `--host`, `--port`, and `--no-reload`.

## Running PDE (SpiceDB) Demos

The PDE engine requires a running SpiceDB instance. From a source checkout, install the demo dependencies first; the example runner imports LangChain. Then use the demo app's Docker setup:

**Prerequisites:** Docker installed and running.

```bash
# Install demo/example dependencies from the repo checkout
uv sync --extra langchain --extra dev
# or: uv sync --extra all --extra dev

# Optional smoke test without SpiceDB
uv run python -m examples.run demo1_poisoned_readme --protected

# Start SpiceDB (from project root)
cd examples && docker compose up -d && cd ..

# Run Demo 5 (taint cascade) with PDE — requires SpiceDB
uv run python -m examples.run demo5_taint_cascade --protected

# Stop SpiceDB when done
cd examples && docker compose stop && cd ..
```

**What Demo 5 exercises:**
- ACL-granted tools (readonly, developer roles) pass at low taint
- Python taint gate blocks tools when `current_taint > TOOL_TAINT_LIMIT[tool]`
- After `fetch_url` (medium risk), taint rises; `git_push` (limit 20) is blocked
- Full IPI scenario: agent reads external content → taint increases → dangerous tools blocked
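
In code terms, the taint gate in the second bullet amounts to the sketch below. The `git_push` limit of 20 comes from the bullets above; the other tool names and limits are illustrative, not the actual values in `janus/policy/pde/config.py`:

```python
# Illustrative taint limits; only git_push's limit of 20 is from the docs.
TOOL_TAINT_LIMIT = {"read_file": 80, "fetch_url": 60, "git_push": 20}

def taint_gate(tool: str, current_taint: int) -> bool:
    """Allow a tool only while the session taint stays within its limit."""
    return current_taint <= TOOL_TAINT_LIMIT.get(tool, 0)

print(taint_gate("git_push", 10))  # True: low taint, tool may run
print(taint_gate("git_push", 30))  # False: taint rose past git_push's limit
```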

---

## License

MIT
