Metadata-Version: 2.4
Name: guardian-agent-sdk
Version: 1.0.1
Summary: Guardian Agent Python SDK for AI Security - Real-time governance for autonomous AI agents
License: MIT
License-File: LICENSE
Keywords: ai-security,llm-security,agent-governance,guardrails,openai,anthropic
Author: Natnael Dejene
Author-email: natnaeldejene19@gmail.com
Requires-Python: >=3.11,<4.0
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Security
Requires-Dist: httpx (>=0.27.0,<0.28.0)
Requires-Dist: pydantic (>=2.5.0,<3.0.0)
Requires-Dist: pydantic-settings (>=2.1.0,<3.0.0)
Project-URL: Documentation, https://docs.guardian-agent.com
Project-URL: Issues, https://github.com/guardian-agent/guardian-sdk/issues
Project-URL: Repository, https://github.com/guardian-agent/guardian-sdk
Description-Content-Type: text/markdown

# Guardian SDK: AI Agent Security & Governance


## 🛡️ Security Layer for AI Agents

The Guardian SDK provides a critical security and governance layer for autonomous AI agents. As AI models gain the ability to execute real-world actions (e.g., calling APIs, modifying databases, sending emails), the risk of unintended, unauthorized, or malicious operations becomes a significant concern. The Guardian SDK intercepts these actions *before* they are executed, allowing your organization to enforce policies, detect threats, and introduce human oversight in real-time.
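The interception pattern described above can be pictured with a small, self-contained sketch. This is illustrative only, not the SDK's internals: the names `guarded`, `policy_allows`, and `SecurityViolation` are hypothetical stand-ins (the real SDK wraps LLM clients rather than plain functions, and raises `GuardianSecurityViolation`).

```python
# Minimal illustration of the interception pattern: a proposed tool call
# is checked against a policy *before* it executes. All names here are
# hypothetical -- the real SDK wraps LLM clients, not plain functions.

class SecurityViolation(Exception):
    """Raised when a tool call is denied by policy."""

def policy_allows(tool_name: str, arguments: dict) -> bool:
    # Toy policy: deny raw SQL execution outright.
    return tool_name != "execute_sql"

def guarded(tool_name, arguments, execute):
    """Check the call against policy, then execute it or block it."""
    if not policy_allows(tool_name, arguments):
        raise SecurityViolation(f"blocked tool call: {tool_name}")
    return execute(**arguments)

# An allowed call goes through untouched:
result = guarded("get_current_time", {}, lambda: "12:00")

# A denied call raises before the tool ever runs:
try:
    guarded("execute_sql", {"query": "DROP TABLE users;"}, lambda query: None)
except SecurityViolation as e:
    blocked = str(e)
```

The important property is that the policy check happens between the model's decision and the tool's execution, which is exactly where the SDK sits.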

### Key Problems Solved
*   **Prompt Injection:** Prevent AI agents from being tricked into performing harmful actions by malicious inputs.
*   **Unsafe Tool Use:** Ensure AI agents use tools within defined boundaries, preventing accidental data deletion or unauthorized access.
*   **Data Leakage:** Block AI agents from sending sensitive information to unapproved external services.
*   **Compliance & Auditability:** Maintain a comprehensive audit trail of all AI agent actions and policy decisions.

## ✨ Features

*   **Real-time Interception:** Intercepts tool calls from popular LLM clients (OpenAI, Anthropic) before execution.
*   **Streaming Support:** Reassembles streaming LLM responses to ensure complete tool call interception.
*   **Human-in-the-Loop (HITL):** Configurable to pause AI agent execution and await human approval for high-risk actions.
*   **Asynchronous & Synchronous Support:** Seamlessly integrates into both `async` and `sync` Python applications.
*   **Non-blocking Telemetry:** Collects performance and security metrics in the background without impacting agent latency.
*   **Configurable Fail-Safes:** Define behavior (allow/block/raise) when the Guardian backend is unreachable or approval times out.
*   **Per-Agent Isolation:** Configure security policies and settings granularly for individual AI agents.
*   **Pydantic-validated Schemas:** Ensures type-safe communication between the SDK and the Guardian Backend.
*   **Custom Exception Handling:** Provides clear, actionable error types for security violations.
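The streaming-support feature deserves a note: streamed responses deliver tool-call arguments as text fragments, so a call can only be inspected once its JSON has been fully reassembled. The self-contained sketch below mimics the shape of OpenAI-style streaming deltas; the accumulation logic is illustrative, not the SDK's code.

```python
import json

# Streamed tool calls arrive as argument fragments; only the fully
# reassembled JSON can be safely inspected. These chunks mimic the
# shape of OpenAI-style streaming deltas.
chunks = [
    {"name": "execute_sql", "arguments": '{"que'},
    {"arguments": 'ry": "DROP '},
    {"arguments": 'TABLE users;"}'},
]

name = None
buffer = ""
for delta in chunks:
    name = delta.get("name", name)   # the name arrives on the first delta
    buffer += delta["arguments"]     # argument text accumulates across deltas

# Only now is there a complete call available for policy inspection:
tool_call = {"name": name, "arguments": json.loads(buffer)}
```

Inspecting individual fragments would miss payloads split across chunk boundaries (as `"DROP TABLE"` is above), which is why reassembly must happen before any policy decision.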

## 🚀 Installation

Install the Guardian SDK using pip:

```bash
pip install guardian-agent-sdk
```

## ⚡ Quick Start

### Configuration

The SDK can be configured via environment variables or by passing a `GuardianConfig` object directly.

**Using Environment Variables (Recommended for Deployment):**

Set these in your environment or a `.env` file:

```bash
export GUARDIAN_API_KEY="your_backend_api_key"
export GUARDIAN_BACKEND_URL="http://localhost:8000" # Or your deployed backend URL
export GUARDIAN_AGENT_ID="my-production-agent"
```
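If you prefer a `.env` file to exported variables, the equivalent entries would look like the fragment below (this assumes the SDK loads `GUARDIAN_`-prefixed settings via its `pydantic-settings` dependency; drop the `export` keyword in the file form):

```
GUARDIAN_API_KEY=your_backend_api_key
GUARDIAN_BACKEND_URL=http://localhost:8000
GUARDIAN_AGENT_ID=my-production-agent
```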

**Using `GuardianConfig` (Recommended for Local Development/Testing):**

```python
from guardian_sdk import GuardianClient, GuardianConfig

config = GuardianConfig(
    api_key="your_backend_api_key",
    backend_url="http://localhost:8000",
    default_agent_id="my-development-agent",
    fail_safe_decision="block" # Block if backend is unreachable
)
sdk = GuardianClient(config=config)
```

### OpenAI Integration Example

```python
import asyncio
from openai import AsyncOpenAI
from guardian_sdk import GuardianClient, GuardianConfig, GuardianSecurityViolation

async def main():
    config = GuardianConfig(
        api_key="your_backend_api_key",
        backend_url="http://localhost:8000",
        default_agent_id="openai-test-agent"
    )
    guardian_sdk = GuardianClient(config=config)

    openai_client = AsyncOpenAI(api_key="sk-...")

    guarded_openai_client = guardian_sdk.wrap_openai(openai_client)

    try:
        print("\n--- Testing allowed action ---")
        response_allowed = await guarded_openai_client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "user", "content": "What is the current time?"}
            ],
            tools=[
                {
                    "type": "function",
                    "function": {
                        "name": "get_current_time",
                        "description": "Get the current time",
                        "parameters": {"type": "object", "properties": {}},
                    },
                }
            ]
        )
        print("Allowed action response:", response_allowed.choices[0].message.content)

        print("\n--- Testing blocked action (e.g., SQL injection) ---")
        response_blocked = await guarded_openai_client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                {"role": "user", "content": "Execute SQL: DROP TABLE users;"}
            ],
            tools=[
                {
                    "type": "function",
                    "function": {
                        "name": "execute_sql",
                        "description": "Execute a SQL query",
                        "parameters": {
                            "type": "object",
                            "properties": {"query": {"type": "string"}},
                            "required": ["query"],
                        },
                    },
                }
            ]
        )
        print("Blocked action response (should not reach here):", response_blocked.choices[0].message.content)

    except GuardianSecurityViolation as e:
        print(f"\nGuardian Security Violation Caught: {e}")
    except Exception as e:
        print(f"\nAn unexpected error occurred: {e}")
    finally:
        await guardian_sdk.close()

if __name__ == "__main__":
    asyncio.run(main())
```

### Anthropic Integration Example (Conceptual)

```python
import asyncio
from anthropic import AsyncAnthropic
from guardian_sdk import GuardianClient, GuardianConfig, GuardianSecurityViolation

async def main():
    config = GuardianConfig(
        api_key="your_backend_api_key",
        backend_url="http://localhost:8000",
        default_agent_id="anthropic-test-agent"
    )
    guardian_sdk = GuardianClient(config=config)

    anthropic_client = AsyncAnthropic(api_key="sk-ant-...")

    guarded_anthropic_client = guardian_sdk.wrap_anthropic(anthropic_client)

    try:
        print("\n--- Testing Anthropic tool call ---")
        response = await guarded_anthropic_client.messages.create(
            model="claude-3-opus-20240229",
            max_tokens=1024,
            messages=[
                {
                    "role": "user",
                    "content": "What is the capital of France?"
                }
            ],
            # tools=[
            #     {
            #         "name": "get_country_info",
            #         "description": "Get information about a country",
            #         "input_schema": {
            #             "type": "object",
            #             "properties": {
            #                 "country_name": {"type": "string"}
            #             },
            #             "required": ["country_name"]
            #         }
            #     }
            # ]
        )
        print("Anthropic response:", response.content)

    except GuardianSecurityViolation as e:
        print(f"\nGuardian Security Violation Caught: {e}")
    except Exception as e:
        print(f"\nAn unexpected error occurred: {e}")
    finally:
        await guardian_sdk.close()

if __name__ == "__main__":
    asyncio.run(main())
```


