Metadata-Version: 2.3
Name: agentstax-eval
Version: 0.1.0
Summary: A lightweight Python library for evaluating AI agents and RAG pipelines.
Author: Brandon Cate
Author-email: Brandon Cate <brandoncate95@gmail.com>
License: MIT
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Testing
Requires-Dist: anthropic>=0.80.0 ; extra == 'anthropic'
Requires-Dist: google-genai>=1.0.0 ; extra == 'google'
Requires-Dist: openai>=1.0.0 ; extra == 'openai'
Requires-Python: >=3.9
Project-URL: Homepage, https://github.com/agentstax/eval-python-sdk
Project-URL: Repository, https://github.com/agentstax/eval-python-sdk
Project-URL: Issues, https://github.com/agentstax/eval-python-sdk/issues
Provides-Extra: anthropic
Provides-Extra: google
Provides-Extra: openai
Description-Content-Type: text/markdown

# agentstax-eval

A lightweight Python library for evaluating AI agents and RAG pipelines. No magic, no vendor lock-in — just explicit, readable evaluation logic.

[![PyPI version](https://img.shields.io/pypi/v/agentstax-eval)](https://pypi.org/project/agentstax-eval/)
[![Python 3.9+](https://img.shields.io/badge/python-3.9%2B-blue)](https://python.org)
[![License: MIT](https://img.shields.io/badge/license-MIT-green)](LICENSE)

---

## Table of Contents

- [Key Features](#key-features)
- [Installation](#installation)
- [Why agentstax-eval](#why-agentstax-eval)
- [Quickstart](#quickstart)
- [Metrics](#metrics)
- [Providers](#providers)
- [Monitoring Dashboard](#monitoring-dashboard)
- [Core API](#core-api)
- [Framework Auto-Extraction](#framework-auto-extraction)
- [Caching](#caching)
- [Multi-Agent Evaluation](#multi-agent-evaluation)
- [Async Support](#async-support)
- [CI / Regression Testing](#ci--regression-testing)
- [Further Reading](#further-reading)
- [Contributing](#contributing)
- [License](#license)

---

## Key Features

- **Four objects, one job each** — `Dataset`, `Task`, `Evaluation`, `Pipeline`.
- **Zero required dependencies** — the core library installs with no third-party packages.
- **Bring your own LLM** — pass any `fn(prompt: str) -> str` as a judge. No default provider.
- **Metrics are functions** — `fn(dataset_row: dict) -> float`. No base classes, no decorators.
- **Built-in LLM-as-judge** — correctness, relevance, faithfulness, completeness, rubric.
- **Framework auto-extraction** — pass a LangGraph, Google ADK, OpenAI Agents, CrewAI, LlamaIndex, or MSAF agent to `Pipeline` and get topology, model, tools, and system prompt extracted automatically.
- **Agent fingerprinting** — topology changes are hashed, so caches auto-invalidate and the monitoring dashboard detects architecture drift.
- **LLM response caching** — `DiskCache` and `MemoryCache` keyed on judge + prompt + agent fingerprint.
- **Real-time monitoring** — pair with [agentstax-eval-monitor](#monitoring-dashboard) for a live dashboard with regression detection, metric trends, and agent network visualization.
- **CI-ready** — `assert_passing()`, `failures()`, and JSON save/load for pytest regression tests.

---

## Installation

```bash
pip install agentstax-eval
```

**Requires:** Python 3.9+. Zero core dependencies.

**Optional extras** for built-in provider functions:

| Extra | Install command | Adds |
|-------|----------------|------|
| `openai` | `pip install "agentstax-eval[openai]"` | `openai` SDK |
| `anthropic` | `pip install "agentstax-eval[anthropic]"` | `anthropic` SDK |
| `google` | `pip install "agentstax-eval[google]"` | `google-genai` SDK |

---

## Why agentstax-eval

Every eval framework I've used feels more complicated than it should be: too heavy, too opinionated, or locked into a single vendor.

I often end up writing a simple script to test exactly what I want. That works great at first, but its weaknesses show as the project grows.

I built agentstax-eval to fix that. It's simple when you need it, but flexible enough to grow with your agent architecture.

It works with whatever you're already using — you can pass your LangGraph, ADK, OpenAI Agents, CrewAI, LlamaIndex, or MSAF agent directly to `Pipeline`. It will walk the hierarchy, extract the topology, and fingerprint it. When your architecture changes, the cache invalidates and the [monitor](#monitoring-dashboard) flags the drift.

Pair it with [agentstax-eval-monitor](#monitoring-dashboard) and you get a live dashboard that shows regression detection, metric trends, and agent network visualization.

---

## Quickstart

```python
from openai import OpenAI
from agentstax_eval import Pipeline, Dataset, Task, Evaluation
from agentstax_eval.metrics import llm_correctness
from agentstax_eval.providers import openai_provider

client = OpenAI()

dataset = Dataset([
    {"question": "What is the capital of France?", "expected_answer": "Paris"},
    {"question": "Who wrote Hamlet?",              "expected_answer": "Shakespeare"},
    {"question": "What is 2 + 2?",                "expected_answer": "4"},
])

def get_answer(dataset_row: dict) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": dataset_row["question"]}],
    )
    return {"answer": response.choices[0].message.content}

pipeline = Pipeline(
    dataset=dataset,
    tasks=[Task(get_answer)],
    evaluation=Evaluation(metrics=[llm_correctness(llm=openai_provider())]),
)

results = pipeline.run()
results.save(directory="results", base_filename="eval")
print(results)
```

### With agent auto-extraction

Pass your agent object directly to unlock automatic metadata extraction, topology mapping, and smart caching:

```python
from agentstax_eval import Pipeline, Dataset, Task, Evaluation, DiskCache
from agentstax_eval.metrics import llm_correctness
from agentstax_eval.providers import openai_provider

pipeline = Pipeline(
    dataset=dataset,
    tasks=[Task(get_answer)],
    evaluation=Evaluation(metrics=[llm_correctness(llm=openai_provider())]),
    agent=my_langgraph_agent,           # auto-extracts topology, model, tools
    cache=DiskCache(".cache"),           # caches judge responses, invalidates on topology change
)

results = pipeline.run()
results.save(directory="results", base_filename="my_agent_eval")
```

The saved JSON now includes a `topology` field with the full agent graph, fingerprint, and per-node metadata — which the monitoring dashboard uses for network visualization and change detection.
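For orientation, the topology block might look roughly like this. This is a hypothetical sketch: only the presence of `topology` under `metadata`, the agent graph, the fingerprint, and per-node metadata are documented; the node and edge key names below are illustrative, not the library's exact schema.

```python
# Hypothetical shape of the saved topology block (illustrative only; the
# library's exact field names may differ).
saved = {
    "metadata": {
        "topology": {
            "fingerprint": "a3f9c0de",  # hash of the agent graph
            "nodes": [
                {"id": "planner", "model": "gpt-4o-mini", "tools": ["search"]},
                {"id": "writer", "model": "gpt-4o-mini", "tools": []},
            ],
            "edges": [["planner", "writer"]],
        }
    }
}
```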

---

## Metrics

### Deterministic

```python
from agentstax_eval.metrics import exact_match, contains_answer, json_valid
```

| Metric | What it checks | Source | LLM required |
|--------|---------------|--------|--------------|
| [`exact_match`](src/agentstax_eval/metrics/exact_match.py#L13) | `answer.strip().lower() == expected_answer.strip().lower()` | [source](src/agentstax_eval/metrics/exact_match.py) | No |
| [`contains_answer`](src/agentstax_eval/metrics/contains_answer.py#L15) | `expected_answer` appears in `answer` | [source](src/agentstax_eval/metrics/contains_answer.py) | No |
| [`json_valid`](src/agentstax_eval/metrics/json_valid.py#L16) | `answer` parses as valid JSON | [source](src/agentstax_eval/metrics/json_valid.py) | No |

### LLM-as-Judge

Factory functions that take an `llm` callable and return a metric. The `llm` can be synchronous (`fn(prompt: str) -> str`) or asynchronous (`async def`).

```python
from agentstax_eval.metrics import llm_correctness, llm_relevance, llm_faithfulness, llm_completeness, llm_rubric
```

| Metric | Reads | What it scores | Prompt |
|--------|-------|---------------|--------|
| [`llm_correctness(llm=)`](src/agentstax_eval/metrics/llm_correctness.py#L49) | `question`, `expected_answer`, `answer` | Factual correctness against reference | [prompt](src/agentstax_eval/metrics/llm_correctness.py#L17) |
| [`llm_relevance(llm=)`](src/agentstax_eval/metrics/llm_relevance.py#L48) | `question`, `answer` | Whether answer addresses the question (reference-free) | [prompt](src/agentstax_eval/metrics/llm_relevance.py#L17) |
| [`llm_faithfulness(llm=)`](src/agentstax_eval/metrics/llm_faithfulness.py#L48) | `answer`, `context` | Whether claims are supported by context (RAG) | [prompt](src/agentstax_eval/metrics/llm_faithfulness.py#L17) |
| [`llm_completeness(llm=)`](src/agentstax_eval/metrics/llm_completeness.py#L51) | `question`, `expected_answer`, `answer` | Whether all key points are covered | [prompt](src/agentstax_eval/metrics/llm_completeness.py#L17) |
| [`llm_rubric(llm=, criteria=)`](src/agentstax_eval/metrics/llm_rubric.py#L51) | `question`, `answer` | User-defined criteria in plain English | [prompt](src/agentstax_eval/metrics/llm_rubric.py#L16) |

All use a six-point scale (`1.0`, `0.8`, `0.6`, `0.4`, `0.2`, `0.0`) with chain-of-thought reasoning.
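To illustrate, a judge response on this scale can be reduced to a float with a small parser. This is a sketch of the idea only: the `SCORE:` line format is an assumption for illustration, not the library's actual prompt or parsing logic.

```python
import re

# The six valid scores documented above.
VALID_SCORES = {1.0, 0.8, 0.6, 0.4, 0.2, 0.0}

def parse_judge_score(response: str) -> float:
    """Extract a final 'SCORE: x.y' line (assumed format) and validate it
    against the six-point scale. Illustrative only; the library's actual
    parsing may differ."""
    match = re.search(r"SCORE:\s*([01](?:\.\d+)?)", response, re.IGNORECASE)
    if match is None:
        raise ValueError(f"No SCORE line found in judge response: {response!r}")
    score = float(match.group(1))
    if score not in VALID_SCORES:
        raise ValueError(f"Score {score} is not on the six-point scale")
    return score

reasoning = "The answer names Paris, matching the reference.\nSCORE: 1.0"
parse_judge_score(reasoning)  # 1.0
```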

### Custom Metrics

Any function with the signature `fn(dataset_row: dict) -> float` works as a metric. The function receives the full row dict — including `question`, `expected_answer`, `answer`, and any extra fields your tasks added (like `context`). Return a float between `0.0` and `1.0`. The function's `__name__` is used as the column name in results.

```python
# Deterministic metric — no LLM needed
def answer_is_concise(dataset_row: dict) -> float:
    return 1.0 if len(dataset_row["answer"].split()) <= 20 else 0.0

# Custom LLM metric — you control the prompt and parsing
def my_judge_metric(dataset_row: dict) -> float:
    prompt = f"Is this answer polite? Answer 1 or 0.\nAnswer: {dataset_row['answer']}"
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return float(response.choices[0].message.content.strip())

evaluation = Evaluation(metrics=[exact_match, answer_is_concise, my_judge_metric])
```

When using factory functions or lambdas, set `__name__` to control the result column name:

```python
concise = llm_rubric(llm=judge, criteria="Two sentences or fewer.")
concise.__name__ = "rubric_concise"
```

---

## Providers

Built-in `fn(prompt: str) -> str` wrappers that handle client setup and set `judge_model` metadata automatically:

```python
from agentstax_eval.providers import openai_provider, anthropic_provider, google_provider

judge = openai_provider()                            # default: gpt-4.1, reads OPENAI_API_KEY
judge = anthropic_provider(model="claude-opus-4-6")  # reads ANTHROPIC_API_KEY
judge = google_provider(model="gemini-2.5-pro")      # reads GEMINI_API_KEY
```

### Custom Providers

A provider is any `fn(prompt: str) -> str`. To get automatic `judge_model` tracking in result metadata, set the attribute on the function:

```python
from openai import OpenAI

client = OpenAI()

def my_judge(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

my_judge.judge_model = "openai/gpt-4o"
```

The `judge_model` string flows into `metadata.scoring.<metric_name>.judge_model` in saved results. If omitted, scoring metadata will simply not include it.

---

## Monitoring Dashboard

**[agentstax-eval-monitor](https://github.com/your-org/agentstax-eval-monitor)** is a companion real-time dashboard that watches a directory of agentstax-eval result files and provides:

- **Regression detection** — agents categorized as regressed, improved, or healthy based on metric deltas
- **Metric trend sparklines** — per-metric performance history at a glance
- **Per-agent deep dives** — zoomable line charts with threshold lines and metadata change markers
- **Agent network visualization** — interactive DAG of your multi-agent topology, with added/removed agents highlighted
- **Architecture drift detection** — visual timeline of when topology or metric config changed
- **Live updates** — WebSocket-powered, updates as new result files land

### Setup

```bash
# Install
cd agentstax-eval-monitor
bun install

# Point at your results directory
bun run index.ts ./results
```

Opens a dashboard at `http://localhost:3000`.

---

## Core API

### Dataset

Wraps a list of dicts. Each row must have a `question`. The `expected_answer` field is optional at the dataset level — metrics that need it raise `MissingFieldError` at scoring time.

```python
dataset = Dataset([
    {"question": "What year did WWII end?", "expected_answer": "1945"},
])
```

### Task

Wraps a callable `fn(dataset_row: dict) -> dict`. The returned dict is merged into each row.

```python
task = Task(get_answer)
answered_dataset = task.run(dataset)  # returns new Dataset, original not mutated
```

### Evaluation

Holds a list of metrics and scores a dataset. Metrics are `fn(dataset_row: dict) -> float`.

```python
evaluation = Evaluation(metrics=[exact_match, llm_correctness(llm=my_judge)])
results = evaluation.run(answered_dataset, metadata={"model": "gpt-4o"})
```

### Pipeline

Convenience wrapper: runs tasks sequentially, then scores. Accepts `agent` for auto-extraction and `cache` for LLM judge caching.

```python
pipeline = Pipeline(
    dataset=dataset,
    tasks=[Task(get_answer)],
    evaluation=Evaluation(metrics=[llm_correctness(llm=my_judge)]),
    agent=my_agent,                # optional: auto-extract topology
    cache=DiskCache(".cache"),     # optional: cache judge responses (requires agent)
    metadata={"experiment": "v2"}, # optional: merged into results (overrides extracted values)
)
results = pipeline.run()
```

| Parameter | Type | Description |
|---|---|---|
| `dataset` | `Dataset` | Input rows to process |
| `tasks` | `list[Task]` | Ordered task list, run sequentially |
| `evaluation` | `Evaluation` | Scores completed rows |
| `metadata` | `dict \| None` | Merged into result metadata; overrides extracted values |
| `agent` | `object \| None` | Agent object for [auto-extraction](#framework-auto-extraction) |
| `cache` | `Cache \| None` | LLM judge [cache](#caching); requires `agent` |

### EvaluationResults

```python
results.metadata         # dict with timestamp_utc, topology, scoring, etc.
results.rows             # list[ResultRow] — each has .data (dict) and .scores (dict)
results.assert_passing() # True if all thresholded metrics pass on every row
results.failures()       # list[Failure] — each row/metric pair below threshold

# Save and load
path = results.save(directory="results", base_filename="eval")
latest = EvaluationResults.load(directory="results", base_filename="eval")
```

---

## Framework Auto-Extraction

Pass an agent object from a supported framework and Pipeline automatically extracts its metadata and topology — no manual `metadata` dict needed.

Supported frameworks:

| Framework | Install |
|---|---|
| LangGraph | `pip install langgraph` |
| Google ADK | `pip install google-adk` |
| OpenAI Agents SDK | `pip install openai-agents` |
| CrewAI | `pip install crewai` |
| LlamaIndex | `pip install llama-index` |
| Microsoft Agent Framework | `pip install msaf` |

```python
pipeline = Pipeline(
    dataset=dataset,
    tasks=[Task(get_answer)],
    evaluation=Evaluation(metrics=[llm_correctness(llm=judge)]),
    agent=my_agent,  # just pass your agent
)
```

---

## Caching

Cache LLM judge responses to avoid redundant API calls. Cache keys include judge model, prompt, and agent fingerprint — the cache auto-invalidates when agent topology changes.

```python
from agentstax_eval import DiskCache, MemoryCache

# Persistent (JSON file, atomic writes, thread-safe)
cache = DiskCache(directory=".cache")

# In-process (lost when process exits, thread-safe)
cache = MemoryCache()
```

Pass to Pipeline with `agent` for automatic fingerprint extraction:

```python
pipeline = Pipeline(
    dataset=dataset,
    tasks=[Task(get_answer)],
    evaluation=Evaluation(metrics=[llm_correctness(llm=judge)]),
    agent=my_agent,
    cache=DiskCache(".cache"),
)

results = pipeline.run()
cache.stats()  # {"hits": 0, "misses": 3, "size": 3}

# Second run with same agent topology: all hits
results = pipeline.run()
cache.stats()  # {"hits": 3, "misses": 3, "size": 3}
```
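To make the invalidation behavior concrete, a cache key along these lines can be derived from the three components. This is a sketch of the concept, not the library's internal key format:

```python
import hashlib
import json

def make_cache_key(judge_model: str, prompt: str, fingerprint: str) -> str:
    """Combine judge model, prompt, and agent fingerprint into a deterministic
    key. Illustrative only; the library's actual key derivation may differ."""
    payload = json.dumps(
        {"judge_model": judge_model, "prompt": prompt, "fingerprint": fingerprint},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

key_a = make_cache_key("openai/gpt-4o", "Score this answer.", "fp-v1")
key_b = make_cache_key("openai/gpt-4o", "Score this answer.", "fp-v2")
assert key_a != key_b  # a new fingerprint after a topology change forces a miss
```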

---

## Async Support

All three core objects support `run_async(concurrency=N)` for concurrent row processing:

```python
import asyncio
from agentstax_eval import Pipeline, Task, Evaluation
from agentstax_eval.metrics import exact_match

async def call_agent(dataset_row: dict) -> dict:
    response = await async_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": dataset_row["question"]}],
    )
    return {"answer": response.choices[0].message.content}

pipeline = Pipeline(
    dataset=dataset,
    tasks=[Task(call_agent)],
    evaluation=Evaluation(metrics=[exact_match]),
    agent=my_agent,
)

results = asyncio.run(pipeline.run_async(concurrency=5))
```

`Task.run_async()` and `Evaluation.run_async()` are also available for step-by-step use. Default concurrency is `10`.
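Bounded concurrency of this kind is commonly implemented with an `asyncio.Semaphore`. The standalone sketch below shows that pattern under the assumption that `run_async` works this way; it is not the library's actual implementation, and `run_rows` and `demo_worker` are hypothetical names:

```python
import asyncio

async def run_rows(rows, worker, concurrency=10):
    """Run `worker` over every row, with at most `concurrency` in flight."""
    semaphore = asyncio.Semaphore(concurrency)

    async def bounded(row):
        async with semaphore:  # caps the number of concurrent workers
            return await worker(row)

    # gather preserves input order regardless of completion order
    return await asyncio.gather(*(bounded(row) for row in rows))

async def demo_worker(row):
    await asyncio.sleep(0)  # stand-in for a real API call
    return {**row, "answer": row["question"].upper()}

rows = [{"question": f"q{i}"} for i in range(5)]
results = asyncio.run(run_rows(rows, demo_worker, concurrency=2))
```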

---

## CI / Regression Testing

```python
# test_agent_quality.py
from agentstax_eval import Pipeline, Dataset, Task, Evaluation
from agentstax_eval.metrics import llm_correctness
from agentstax_eval.providers import openai_provider

def test_agent_passes_threshold():
    results = Pipeline(
        dataset=Dataset([
            {"question": "Capital of France?", "expected_answer": "Paris"},
            {"question": "What is 2 + 2?", "expected_answer": "4"},
        ]),
        tasks=[Task(call_my_agent)],
        evaluation=Evaluation(metrics=[llm_correctness(llm=openai_provider())]),
        agent=my_agent,
    ).run()

    assert results.assert_passing(), "\n".join(str(f) for f in results.failures())
```

Save results for trend tracking — the monitoring dashboard can watch the same directory for live regression visibility.

---

## Further Reading

- [LLM Judge Prompt Writing Guide](docs/llm-judge-prompt-guide.md) — rubric design, scoring scales, bias mitigation, and prompt examples.

---

## Contributing

Contributions are welcome. Please open an issue to discuss before submitting a PR.

```bash
git clone https://github.com/your-org/agentstax-eval.git && cd agentstax-eval
uv sync && uv run pytest --ignore=tests/e2e -q
```

---

## License

MIT — see [LICENSE](LICENSE) for details.
