Metadata-Version: 2.4
Name: adversaria
Version: 0.1.0
Summary: Adversarial Testing Harness for Large Language Models
Home-page: https://github.com/adversaria/adversaria
Author: Adversaria Contributors
Project-URL: Bug Reports, https://github.com/adversaria/adversaria/issues
Project-URL: Source, https://github.com/adversaria/adversaria
Keywords: llm security testing adversarial ai
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Security
Requires-Python: >=3.8
Description-Content-Type: text/markdown
Requires-Dist: pyyaml>=6.0
Dynamic: author
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: keywords
Dynamic: project-url
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

# Adversaria Python SDK

Python SDK for Adversaria, an adversarial testing harness for large language models (LLMs).

## Installation

```bash
pip install adversaria
```

**Note**: This SDK requires the Adversaria CLI to be installed:
```bash
cargo install adversaria
```
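If you want to fail fast when the CLI is missing, a quick preflight check with the standard library works; this helper is an illustrative sketch, not part of the SDK:

```python
import shutil

def adversaria_cli_available() -> bool:
    """Return True if the `adversaria` binary is on PATH."""
    return shutil.which("adversaria") is not None

if not adversaria_cli_available():
    print("Adversaria CLI not found; install it with: cargo install adversaria")
```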

## Quick Start

```python
from adversaria import Adversaria

# Create client
client = Adversaria()

# Run tests
result = client.run(
    provider="openai",
    model="gpt-4",
    api_key="sk-..."  # Optional, uses OPENAI_API_KEY env var
)

# Check results
print(f"Risk Score: {result.risk_score}/100")
print(f"Successful Attacks: {result.successful_attacks}/{result.total_attacks}")

# Save report
report_path = result.save_report("./reports")
print(f"Report saved: {report_path}")
```

## API Reference

### Adversaria

Main client class for running security tests.

```python
client = Adversaria(config_path=None)
```

**Arguments:**
- `config_path` (str, optional): Path to a configuration file

**Methods:**

#### `run(provider, model, suites=None, api_key=None)`

Run security tests against an LLM.

**Arguments:**
- `provider` (str): Provider name ('openai', 'anthropic', 'ollama')
- `model` (str): Model name
- `suites` (list, optional): List of suite IDs to run
- `api_key` (str, optional): API key (uses env var if not provided)

**Returns:** `TestResult`

#### `list_suites()`

List available attack suites.

**Returns:** `List[Suite]`

#### `list_reports(directory="./reports")`

List available reports.

**Arguments:**
- `directory` (str, optional): Directory to search for reports (default: `"./reports"`)

**Returns:** `List[str]` - List of report file paths

#### `load_report(filepath)`

Load a report from file.

**Arguments:**
- `filepath` (str): Path to the report file

**Returns:** `TestResult`

### TestResult

Test execution results.

**Attributes:**
- `id` (str): Unique test run ID
- `model` (str): Model tested
- `provider` (str): Provider used
- `timestamp` (str): Test timestamp
- `risk_score` (int): Overall risk score (0-100)
- `total_attacks` (int): Total attacks executed
- `successful_attacks` (int): Number of successful attacks
- `failed_attacks` (int): Number of failed attacks
- `duration_ms` (int): Execution time in milliseconds
- `raw_data` (dict): Full report data
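The attribute list above can be pictured as a plain data shape. This stand-alone sketch (a hypothetical `TestResultShape` dataclass, not the SDK's actual class) just mirrors the documented fields:

```python
from dataclasses import dataclass, field

@dataclass
class TestResultShape:
    """Illustrative stand-in mirroring the documented TestResult attributes."""
    id: str
    model: str
    provider: str
    timestamp: str
    risk_score: int          # 0-100
    total_attacks: int
    successful_attacks: int
    failed_attacks: int
    duration_ms: int
    raw_data: dict = field(default_factory=dict)

# Example values for illustration only
example = TestResultShape(
    id="run-001", model="gpt-4", provider="openai",
    timestamp="2024-01-01T00:00:00Z", risk_score=42,
    total_attacks=20, successful_attacks=3, failed_attacks=17,
    duration_ms=1500,
)
print(example.risk_score)  # 42
```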

**Methods:**

#### `save_report(directory="./reports")`

Save report to directory.

**Arguments:**
- `directory` (str, optional): Directory to write the report to (default: `"./reports"`)

**Returns:** `str` - Path to saved report

## Examples

### Basic Testing

```python
import os
from adversaria import Adversaria

# Set API key
os.environ["OPENAI_API_KEY"] = "sk-..."

# Run test
client = Adversaria()
result = client.run(provider="openai", model="gpt-4")

if result.risk_score > 50:
    print("⚠️ High risk detected!")
```
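If you want coarser labels than a raw threshold, a small bucketing helper is easy to add. The 50/80 cutoffs below are arbitrary examples, and the helper is not part of the SDK:

```python
def risk_band(score: int) -> str:
    """Map a 0-100 risk score to a coarse label (example thresholds)."""
    if not 0 <= score <= 100:
        raise ValueError("risk score must be between 0 and 100")
    if score >= 80:
        return "high"
    if score >= 50:
        return "medium"
    return "low"

print(risk_band(65))  # medium
```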

### Specific Suites

```python
result = client.run(
    provider="openai",
    model="gpt-4",
    suites=["prompt_injection", "jailbreak"]
)
```

### Multiple Providers

```python
providers = [
    ("openai", "gpt-4"),
    ("anthropic", "claude-3-opus-20240229"),
]

for provider, model in providers:
    result = client.run(provider=provider, model=model)
    print(f"{provider}/{model}: {result.risk_score}/100")
```

### Load Previous Report

```python
client = Adversaria()

# List all reports
reports = client.list_reports()

# Load the first report in the list
if reports:
    result = client.load_report(reports[0])
    print(f"Risk Score: {result.risk_score}/100")
```
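`list_reports()` returns file paths, but this document doesn't specify their ordering. If you want the most recently written report, one option is to sort by modification time; here is a stand-alone sketch using placeholder files in place of real `list_reports()` output:

```python
import os
import tempfile
from pathlib import Path

def newest_report(paths):
    """Return the path with the latest modification time, or None if empty."""
    return max(paths, key=os.path.getmtime, default=None)

# Demo with placeholder files standing in for client.list_reports() output.
with tempfile.TemporaryDirectory() as d:
    older = Path(d) / "report-a.json"
    newer = Path(d) / "report-b.json"
    older.write_text("{}")
    newer.write_text("{}")
    # Set modification times explicitly so the ordering is deterministic.
    os.utime(older, (1_000_000_000, 1_000_000_000))
    os.utime(newer, (2_000_000_000, 2_000_000_000))
    print(newest_report([str(older), str(newer)]))  # prints the report-b.json path
```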

## Requirements

- Python 3.8+
- Adversaria CLI (`cargo install adversaria`)
- PyYAML

## License

MIT
