Metadata-Version: 2.4
Name: eval-adapter
Version: 0.1.1
Summary: Unified rubric and run adapter for common eval frameworks.
Author: AuraOne
License-Expression: MIT
Project-URL: Homepage, https://auraone.ai/open
Project-URL: Source, https://github.com/auraoneai/eval-adapter
Classifier: Development Status :: 3 - Alpha
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: rubric-spec>=0.1.1
Requires-Dist: PyYAML>=6.0
Dynamic: license-file

# eval-adapter

`eval-adapter` lets a single rubric-spec v1 rubric and run configuration drive synthetic-compatible runs across Inspect AI, LM Eval Harness, OpenAI Evals, PromptFoo, DeepEval, LangSmith exports, and Phoenix exports.

## Quickstart

```bash
pip install eval-adapter
eval-adapter run --config examples/unified_config_sample.yaml --runner all
```
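A unified config for the command above might look roughly like the following. Every key name here is an illustrative guess, not the adapter's documented schema; consult `examples/unified_config_sample.yaml` in the repository for the real shape:

```yaml
# Illustrative sketch only: key names are assumptions, not the documented schema.
rubric: rubrics/helpfulness_v1.yaml   # a rubric-spec v1 rubric file
runners:                              # which framework adapters to invoke
  - promptfoo
  - deepeval
dataset: data/items.jsonl             # items to evaluate
output: results/                      # where normalized results are written
```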

The runner modules normalize each framework shape into one `EvalRunResult` containing
`item_count`, per-criterion scores, weights, weighted scores, and source metadata.
LangSmith and Phoenix exports can be imported from their feedback/trace shapes; see
`examples/sample_exports.json`.
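As a sketch of the normalized result described above, the snippet below models an `EvalRunResult` with the fields the README lists (`item_count`, per-criterion scores, weights, weighted scores, source metadata) and derives a weighted aggregate. The class, method names, and aggregation rule are illustrative assumptions, not the package's actual API:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the normalized shape; field names follow the
# README's description, but this class is NOT the package's real API.
@dataclass
class EvalRunResult:
    item_count: int
    scores: dict[str, float]   # per-criterion mean scores in [0, 1]
    weights: dict[str, float]  # per-criterion weights
    source: dict[str, str] = field(default_factory=dict)  # e.g. runner name

    def weighted_scores(self) -> dict[str, float]:
        # Multiply each criterion's score by its weight.
        return {c: self.scores[c] * self.weights[c] for c in self.scores}

    def weighted_total(self) -> float:
        # Weight-normalized average across criteria (0.0 if no weight mass).
        total_weight = sum(self.weights.get(c, 0.0) for c in self.scores)
        if total_weight == 0:
            return 0.0
        return sum(self.weighted_scores().values()) / total_weight

result = EvalRunResult(
    item_count=10,
    scores={"accuracy": 0.8, "clarity": 0.6},
    weights={"accuracy": 2.0, "clarity": 1.0},
    source={"runner": "promptfoo"},
)
print(round(result.weighted_total(), 3))  # 0.733
```

The weighted total divides by the summed weights so that rubrics with unnormalized weights still aggregate to a score on the same scale as the per-criterion scores.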

## What This Is Not

This is not a hosted eval platform and includes no paid or customer data.
