Metadata-Version: 2.4
Name: pytest-pyeval
Version: 0.4.0
Summary: pytest plugin integrating pydantic-evals
Keywords: evals,pytest,pydantic
Author: Alex Ward
Author-email: Alex Ward <alxwrd@googlemail.com>
License-Expression: MIT
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Topic :: Software Development :: Quality Assurance
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: Software Development :: Libraries
Requires-Dist: pytest>=8.0
Requires-Dist: pydantic-evals>=1.67
Requires-Dist: logfire ; extra == 'logfire'
Requires-Python: >=3.10
Project-URL: Repository, https://github.com/alxwrd/pytest-pyeval
Project-URL: Releases, https://github.com/alxwrd/pytest-pyeval/releases
Provides-Extra: logfire
Description-Content-Type: text/markdown

<div align="center">
    <h1><code>pytest-pyeval</code></h1>
    <p align="center"><i>
        A <code>pytest</code> plugin integrating <code>pydantic-evals</code>
    </i></p>
    <img width="256px" src="https://raw.githubusercontent.com/alxwrd/pytest-pyeval/refs/heads/main/.github/assets/wizard-768.png">
    <div align="center">
        <a href="https://github.com/alxwrd/pytest-pyeval/actions/workflows/test.yml"><img src="https://img.shields.io/github/actions/workflow/status/alxwrd/pytest-pyeval/test.yml?branch=main&label=main"></a>
        <a href="https://pypi.python.org/pypi/pytest-pyeval"><img src="https://img.shields.io/pypi/v/pytest-pyeval.svg"></a>
        <a href="https://github.com/alxwrd/pytest-pyeval/blob/main/LICENCE"><img src="https://img.shields.io/pypi/l/pytest-pyeval.svg"></a>
    </div>

Run [evals](https://ai.pydantic.dev/evals/) via
[pytest](https://docs.pytest.org/en/stable/) with the full power of fixtures,
using a familiar Arrange, Act, Evaluate pattern.
</div>


## Example

```python
from pyeval import dataset, execute, Case, EqualsExpected, Contains


def uppercase_text(text: str) -> str:
    return text.upper()


@dataset(
    Case(
        name="uppercase_basic",
        inputs="hello world",
        expected_output="HELLO WORLD",
    ),
    Case(
        name="uppercase_with_numbers",
        inputs="hello 123",
        expected_output="HELLO 123",
    ),
)
def eval_uppercase(case: Case):
    result = execute(uppercase_text, case)

    result.evaluate(EqualsExpected())
    result.evaluate(Contains(value="HELLO", case_sensitive=True))
```
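The Arrange, Act, Evaluate pattern maps directly onto the API: `@dataset(...)`
arranges the cases, `execute(...)` acts by running the function under test
against the current case, and `result.evaluate(...)` applies one or more
evaluators to the output.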

```plain
$ uv run pyeval

============================== test session starts ==============================
platform darwin -- Python 3.13.1, pytest-9.0.2, pluggy-1.6.0
plugins: anyio-4.12.1, pyeval-0.4.0
collected 2 items

tests/evals/eval_example.py ●●                                             [100%]

============================= 2 evaluated in 0.02s ==============================
```
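Evals are rendered with `●` markers in place of pytest's usual dots, and the
summary line reads `evaluated` rather than `passed`, so eval runs are easy to
tell apart from regular test output.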


## Installation

```shell
uv add --dev pytest-pyeval
```
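The package also declares an optional `logfire` extra, which pulls in the
[`logfire`](https://pypi.org/project/logfire/) dependency. Install with the
extra enabled if you want it:

```shell
uv add --dev "pytest-pyeval[logfire]"
```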

## Running evals

`pytest-pyeval` keeps evals separate from your regular test suite. Evals are
excluded from plain `pytest` runs by default, since they are typically slower,
hit live APIs, and run on a different cadence to unit tests.

| Command | What runs |
|---|---|
| `pytest` | Regular tests only (`test_*.py`) |
| `pytest --evals` | Eval tests only (`eval_*.py`) |
| `pyeval` | Shorthand for `pytest --evals` |

```shell
pyeval                     # discover and run all evals in the project
pyeval evals/              # run evals under a specific path
pyeval evals/eval_foo.py   # run a single eval file
```
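
Because `pyeval` is shorthand for `pytest --evals`, the usual pytest selection
options should apply as well. A sketch, assuming arguments are forwarded to
pytest unchanged:

```shell
pyeval -k uppercase                           # filter evals by keyword expression
pyeval evals/eval_example.py::eval_uppercase  # run a single eval by node id
pyeval -x                                     # stop after the first failing eval
```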
