Metadata-Version: 2.4
Name: rait-connector
Version: 0.7.0
Summary: Python library for evaluating LLM outputs across multiple ethical dimensions and performance metrics using Azure AI Evaluation services.
License-File: LICENSE
Classifier: Development Status :: 3 - Alpha
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.12
Requires-Dist: apscheduler>=3.11.2
Requires-Dist: azure-ai-evaluation>=1.12.0
Requires-Dist: azure-monitor-query>=2.0.0
Requires-Dist: cryptography>=46.0.3
Requires-Dist: pydantic-settings>=2.12.0
Description-Content-Type: text/markdown

# RAIT Connector

Python library for evaluating LLM outputs across multiple ethical dimensions and performance metrics using Azure AI Evaluation services.

## Features

- **22 Evaluation Metrics** across 8 ethical dimensions
- **Parallel Execution** for faster evaluations
- **Automatic API Integration** with RAIT services
- **Type-Safe** with Pydantic models
- **Flexible Configuration** via environment variables or direct parameters
- **Batch Processing** with custom callbacks
- **Scheduler** for recurring telemetry and calibration jobs
- **Comprehensive Documentation** with examples

## Installation

```bash
pip install rait-connector
```

Or with uv:

```bash
uv add rait-connector
```

## Quick Start

```python
from rait_connector import RAITClient

# Initialize client
client = RAITClient()

# Evaluate a single prompt
result = client.evaluate(
    prompt_id="123",
    prompt_url="https://example.com/123",
    timestamp="2025-12-11T10:00:00+00:00",
    model_name="gpt-4",
    model_version="1.0",
    query="What is AI?",
    response="AI is artificial intelligence...",
    environment="production",
    purpose="monitoring"
)

print(f"Evaluation complete: {result['prompt_id']}")
```

## Configuration

### Environment Variables

Set the environment variables below (those marked optional may be omitted):

```bash
# RAIT API
export RAIT_API_URL="https://api.raitracker.com"
export RAIT_CLIENT_ID="your-client-id"
export RAIT_CLIENT_SECRET="your-client-secret"
```

```bash
# Azure OpenAI
export AZURE_OPENAI_ENDPOINT="https://your.openai.azure.com"
export AZURE_OPENAI_API_KEY="your-api-key"
export AZURE_OPENAI_DEPLOYMENT="your-deployment"
export AZURE_OPENAI_API_VERSION="2024-12-01-preview"  # optional, this is the default
```

```bash
# Azure AD
export AZURE_CLIENT_ID="your-azure-client-id"
export AZURE_TENANT_ID="your-azure-tenant-id"
export AZURE_CLIENT_SECRET="your-azure-client-secret"
```

```bash
# Azure Resources
export AZURE_SUBSCRIPTION_ID="your-subscription-id"
export AZURE_RESOURCE_GROUP="your-resource-group"
export AZURE_PROJECT_NAME="your-project-name"
export AZURE_ACCOUNT_NAME="your-account-name"
export AZURE_AI_PROJECT_URL="https://your.ai.azure.com/..."  # optional
export AZURE_LOG_ANALYTICS_WORKSPACE_ID="your-workspace-id"  # optional, for telemetry queries
```

```bash
# RAIT Ingest
export RAIT_INGEST_URL="https://your-ingest-endpoint"  # required; all log types route through here
```
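
Before constructing the client, it can be useful to verify that the required variables are actually set. The sketch below uses only the standard library; the variable names are taken from the sections above, with the optional ones omitted:

```python
import os

# Required variables from the sections above (optional ones omitted).
REQUIRED_ENV_VARS = [
    "RAIT_API_URL",
    "RAIT_CLIENT_ID",
    "RAIT_CLIENT_SECRET",
    "AZURE_OPENAI_ENDPOINT",
    "AZURE_OPENAI_API_KEY",
    "AZURE_OPENAI_DEPLOYMENT",
    "AZURE_CLIENT_ID",
    "AZURE_TENANT_ID",
    "AZURE_CLIENT_SECRET",
    "AZURE_SUBSCRIPTION_ID",
    "AZURE_RESOURCE_GROUP",
    "AZURE_PROJECT_NAME",
    "AZURE_ACCOUNT_NAME",
    "RAIT_INGEST_URL",
]

def missing_env_vars(env=os.environ):
    """Return the names of required variables that are unset or empty."""
    return [name for name in REQUIRED_ENV_VARS if not env.get(name)]
```

Calling `missing_env_vars()` at startup and failing fast on a non-empty result gives a clearer error than a mid-run authentication failure.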

### Direct Configuration

Or pass configuration directly:

```python
client = RAITClient(
    rait_api_url="https://api.raitracker.com",
    rait_client_id="your-client-id",
    rait_client_secret="your-secret",
    azure_openai_endpoint="https://your.openai.azure.com",
    azure_openai_api_key="your-key",
    azure_openai_deployment="gpt-4",
    # ... other parameters
)
```

## Evaluation Metrics

RAIT Connector supports 22 metrics across 8 ethical dimensions:

| Dimension | Metrics |
|-----------|---------|
| **Bias and Fairness** | Hate and Unfairness |
| **Explainability and Transparency** | Ungrounded Attributes, Groundedness, Groundedness Pro |
| **Monitoring and Compliance** | Content Safety |
| **Legal and Regulatory Compliance** | Protected Materials |
| **Security and Adversarial Robustness** | Code Vulnerability |
| **Model Performance** | Coherence, Fluency, QA, Similarity, F1 Score, BLEU, GLEU, ROUGE, METEOR, Retrieval |
| **Human-AI Interaction** | Relevance, Response Completeness |
| **Social and Demographic Impact** | Sexual, Violence, Self-Harm |
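
For programmatic grouping or reporting, the table above can be mirrored as a plain dictionary. This is illustrative only; the strings here are the display names from the table, not necessarily the identifiers the library uses internally:

```python
# Display names from the metrics table: 22 metrics across 8 dimensions.
DIMENSIONS = {
    "Bias and Fairness": ["Hate and Unfairness"],
    "Explainability and Transparency": [
        "Ungrounded Attributes", "Groundedness", "Groundedness Pro",
    ],
    "Monitoring and Compliance": ["Content Safety"],
    "Legal and Regulatory Compliance": ["Protected Materials"],
    "Security and Adversarial Robustness": ["Code Vulnerability"],
    "Model Performance": [
        "Coherence", "Fluency", "QA", "Similarity", "F1 Score",
        "BLEU", "GLEU", "ROUGE", "METEOR", "Retrieval",
    ],
    "Human-AI Interaction": ["Relevance", "Response Completeness"],
    "Social and Demographic Impact": ["Sexual", "Violence", "Self-Harm"],
}
```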

## Batch Evaluation

Evaluate multiple prompts efficiently:

```python
prompts = [
    {
        "prompt_id": "001",
        "prompt_url": "https://example.com/001",
        "timestamp": "2025-12-11T10:00:00+00:00",
        "model_name": "gpt-4",
        "model_version": "1.0",
        "query": "What is AI?",
        "response": "AI is...",
        "environment": "production",
        "purpose": "monitoring"
    },
    # ... more prompts
]

summary = client.evaluate_batch(prompts)
print(f"Completed: {summary['successful']}/{summary['total']}")
```
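
When prompts share most fields, a small helper keeps the batch list readable. This is a sketch: the field names match the batch example above, and the base values and URL pattern are placeholders to adapt:

```python
from datetime import datetime, timezone

# Shared fields for every prompt in the batch (placeholder values).
BASE = {
    "model_name": "gpt-4",
    "model_version": "1.0",
    "environment": "production",
    "purpose": "monitoring",
}

def make_prompt(prompt_id, query, response, base=BASE):
    """Build one batch entry, filling shared fields and a current UTC timestamp."""
    return {
        **base,
        "prompt_id": prompt_id,
        "prompt_url": f"https://example.com/{prompt_id}",
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "query": query,
        "response": response,
    }

prompts = [
    make_prompt("001", "What is AI?", "AI is..."),
    make_prompt("002", "What is ML?", "ML is..."),
]
```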

### With Custom Callback

```python
def on_complete(summary):
    print(f"Success: {summary['successful']}")
    print(f"Failed: {summary['failed']}")

client.evaluate_batch(prompts, on_complete=on_complete)
```
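
A callback can do more than print. The sketch below records each batch outcome and flags high failure rates; it assumes only the `successful`, `failed`, and `total` keys shown in the summary above:

```python
results = []

def on_complete(summary):
    """Record the batch outcome and warn when the failure rate is high."""
    rate = summary["failed"] / summary["total"] if summary["total"] else 0.0
    results.append({**summary, "failure_rate": rate})
    if rate > 0.1:
        print(f"Warning: {rate:.0%} of prompts failed")
```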

## Calibration

### Automatic Background Calibration

When you call `evaluate()`, the client automatically:

1. Checks the API for calibration prompts
2. If available, runs calibration in the background (once per model/version/environment)
3. Evaluates calibration prompts with pre-defined responses

This happens automatically; no manual intervention is needed.

## Scheduler

Run recurring telemetry and calibration jobs automatically:

```python
from rait_connector import RAITClient, Scheduler

client = RAITClient()
scheduler = Scheduler(client)

scheduler.add_telemetry_job(
    model_name="gpt-4",
    model_version="1.0",
    model_environment="production",
    model_purpose="monitoring",
    interval="daily",
)
scheduler.add_calibration_job(
    model_name="gpt-4",
    model_version="1.0",
    environment="production",
    model_purpose="monitoring",
    invoke_model=lambda prompt: my_llm(prompt),
    interval="weekly",
)

scheduler.start()  # runs in background

# Inspect job state
print(scheduler.status())   # registered jobs and next run time
print(scheduler.history())  # past execution records
```

Supports named intervals (`"hourly"`, `"daily"`, `"weekly"`), cron expressions, `timedelta`, or raw seconds. Custom jobs can be registered via `add_job()` or the `@scheduler.job()` decorator.
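
Normalizing the non-cron interval forms to seconds can be pictured along these lines. This is a hypothetical sketch of the idea, not the library's implementation, and cron expressions would be handled separately:

```python
from datetime import timedelta

# Named intervals expressed in seconds.
NAMED_INTERVALS = {"hourly": 3_600, "daily": 86_400, "weekly": 604_800}

def interval_seconds(interval):
    """Normalize a named interval, timedelta, or raw seconds to seconds."""
    if isinstance(interval, str):
        return float(NAMED_INTERVALS[interval])
    if isinstance(interval, timedelta):
        return interval.total_seconds()
    return float(interval)
```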

## Parallel Execution

Control parallelism for faster evaluations:

```python
result = client.evaluate(
    ...,
    parallel=True,
    max_workers=10  # Use 10 parallel workers
)
```
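
A reasonable starting point for `max_workers` on I/O-bound evaluation calls is the same heuristic `concurrent.futures.ThreadPoolExecutor` uses by default; tune it down if you hit Azure rate limits:

```python
import os

def default_workers():
    """Mirror ThreadPoolExecutor's default cap for I/O-bound work."""
    return min(32, (os.cpu_count() or 1) + 4)
```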

## Documentation

Full documentation is published at **<https://responsible-systems.github.io/rait_connector/>**.

Versioned docs are available for each release (e.g. `/0.6.0/`). The `latest` alias always points to the most recent release.

## Requirements

- Python 3.12+
- Azure OpenAI access
- RAIT API credentials

