Metadata-Version: 2.4
Name: wl-guardrails
Version: 0.1.0
Summary: Python SDK for Watchlight Guardrails — AI content safety and guardrails-as-code engine
Project-URL: Homepage, https://www.watchlight.ai
Project-URL: Documentation, https://github.com/watchlight-ai-beacon/Watchlight-Beacon/tree/main/wl-guardrails/sdk/python#readme
Project-URL: Source, https://github.com/watchlight-ai-beacon/Watchlight-Beacon/tree/main/wl-guardrails/sdk/python
Project-URL: Issues, https://github.com/watchlight-ai-beacon/Watchlight-Beacon/issues
Author-email: Watchlight AI Team <team@watchlight.ai>
License-Expression: LicenseRef-Proprietary
License-File: LICENSE
Keywords: ai,content-safety,guardrails,llm,moderation,policy,rego,watchlight
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: Other/Proprietary License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Security
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Typing :: Typed
Requires-Python: >=3.10
Requires-Dist: httpx>=0.27.0
Requires-Dist: pydantic>=2.0.0
Provides-Extra: dev
Requires-Dist: mypy>=1.9.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23.0; extra == 'dev'
Requires-Dist: pytest-httpx>=0.30.0; extra == 'dev'
Requires-Dist: pytest>=8.0.0; extra == 'dev'
Requires-Dist: ruff>=0.3.0; extra == 'dev'
Description-Content-Type: text/markdown

# WL-Guardrails Python SDK

Python SDK for [Watchlight Guardrails](https://www.watchlight.ai) — AI content safety and guardrails-as-code engine. Check input and output content against configurable safety policies with support for blocking, sanitization, and fail-open/fail-closed modes.

## Features

- **Input/Output Checking**: Validate user messages and LLM responses against safety policies
- **Async/Sync Support**: Both async and sync HTTP clients
- **Fail-Open/Fail-Closed**: Configurable behavior when the service is unavailable
- **Content Sanitization**: Automatically redact PII and sensitive content
- **Typed Models**: Pydantic v2 models for all request/response types
- **Observable**: Structured logging with request correlation IDs

## Installation

```bash
pip install wl-guardrails
```

## Quick Start

### Async Client

```python
from wl_guardrails import WlGuardrailsClient, GuardrailBlockError

async with WlGuardrailsClient("http://localhost:8083") as client:
    try:
        result = await client.check_input("User message here")
        # Content passed all checks
        print(f"Checks run: {result.checks_run}")
    except GuardrailBlockError as e:
        print(f"Blocked: {e.error_code} - {e.guardrail_message}")
```

### Sync Client

```python
from wl_guardrails import WlGuardrailsSyncClient, GuardrailBlockError

with WlGuardrailsSyncClient("http://localhost:8083") as client:
    # Check LLM output before returning to the user
    result = client.check_output(llm_response)

    if result.is_sanitized:
        # Use the sanitized version
        safe_response = result.sanitized
    elif result.is_passed:
        safe_response = llm_response
```

## Content Checking

### Check Input

Validate user/agent messages before processing:

```python
result = await client.check_input(
    content="user message",
    request_id="trace-123",       # Optional correlation ID
    metadata={"agent_id": "a1"},  # Optional policy context
)
```

### Check Output

Validate LLM responses before returning to the user:

```python
result = await client.check_output(
    content=llm_response,
    request_id="trace-123",
)

if result.is_sanitized:
    # PII or sensitive content was redacted
    return result.sanitized
```

### Check Results

Every check returns a `CheckResult`:

```python
result.action        # GuardrailAction: PASS, BLOCK, or SANITIZE
result.is_passed     # True if content passed all checks
result.is_blocked    # True if content was blocked
result.is_sanitized  # True if content was sanitized
result.violations    # List of Violation objects
result.sanitized     # Sanitized content (if action is SANITIZE)
result.checks_run    # Names of checks that were executed
result.request_id    # Correlation ID for tracing
```

## Fail Modes

Configure behavior when the guardrails service is unavailable:

```python
# Fail-open (default): proceed without guardrails on service errors
client = WlGuardrailsClient(fail_mode="open")

# Fail-closed: raise ServiceUnavailable on service errors
client = WlGuardrailsClient(fail_mode="closed")
```

Or set via environment variable:

```bash
export WL_GUARDRAILS_FAIL_MODE=closed
```

## Configuration

| Environment Variable | Default | Description |
|---------------------|---------|-------------|
| `WL_GUARDRAILS_URL` | `http://localhost:8083` | Guardrails service URL |
| `WL_GUARDRAILS_FAIL_MODE` | `open` | Fail mode: `open` or `closed` |
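For example, to point the SDK at a non-default service endpoint and fail closed, set both variables before starting your application (the hostname below is a placeholder):

```bash
# Guardrails service endpoint (defaults to http://localhost:8083)
export WL_GUARDRAILS_URL=https://guardrails.internal.example.com

# Raise ServiceUnavailable instead of proceeding when the service is down
export WL_GUARDRAILS_FAIL_MODE=closed
```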

## Error Handling

```python
from wl_guardrails import (
    GuardrailBlockError,    # Content was blocked by a policy
    ServiceUnavailable,     # Service is unreachable (fail_mode="closed")
    ValidationError,        # Invalid request (e.g., empty content)
    WlGuardrailsError,      # Base exception for all SDK errors
)
```
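A typical call site distinguishes policy blocks from infrastructure failures. The snippet below is a sketch, assuming the exception hierarchy above and a client constructed as in Quick Start; the `guarded_check` helper name is illustrative, not part of the SDK:

```python
from wl_guardrails import (
    WlGuardrailsClient,
    GuardrailBlockError,
    ServiceUnavailable,
    WlGuardrailsError,
)

async def guarded_check(client: WlGuardrailsClient, message: str) -> bool:
    """Illustrative helper: return True if the message may be processed."""
    try:
        await client.check_input(message)
        return True
    except GuardrailBlockError as e:
        # Policy violation: log the reason and reject the message
        print(f"Blocked ({e.error_code}): {e.guardrail_message}")
        return False
    except ServiceUnavailable:
        # Only raised with fail_mode="closed"; treat as a rejection
        return False
    except WlGuardrailsError:
        # Base class catches any other SDK error; re-raise for visibility
        raise
```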

## Health Checks

```python
# Simple boolean check
is_healthy = await client.health()

# Detailed health info
health = await client.get_health()
print(f"Status: {health.status}, Policies loaded: {health.policies_loaded}")
```

## Requirements

- Python 3.10+
- A running WL-Guardrails service (default `http://localhost:8083`)

## License

Proprietary. See [LICENSE](LICENSE) for details.

## Support

- **Partner Portal**: https://www.watchlight.ai/partner
- **Email**: team@watchlight.ai
