Metadata-Version: 2.4
Name: pytest-imply
Version: 0.1.1
Summary: Pytest plugin for test implication — skip tests implied by stronger ones
Author-email: Dimitri Staessens <dimitri@ouroboros.rocks>
License-Expression: MIT
Project-URL: Homepage, https://codeberg.org/o7s/pytest-imply
Project-URL: Repository, https://codeberg.org/o7s/pytest-imply
Project-URL: Issues, https://codeberg.org/o7s/pytest-imply/issues
Keywords: pytest,testing,implication,monotonic,optimization
Classifier: Development Status :: 4 - Beta
Classifier: Framework :: Pytest
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Topic :: Software Development :: Testing
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pytest>=7.0
Dynamic: license-file

# pytest-imply

A pytest plugin for **test implication** — skip tests that are implied
by stronger ones.

## The problem

Parametrized test suites often have an ordered severity axis:

```
test_count[c100]   PASSED   2.58s
test_count[c500]   PASSED   4.62s
test_count[c1000]  PASSED   7.13s
test_count[c5000]  PASSED  27.45s
test_count[c10000] PASSED  52.90s
```

If `c10000` passes, the smaller counts will almost certainly pass
too — running them all wastes CI time.

## The solution

**pytest-imply** lets the developer declare implication relationships
between tests.  When a stronger test passes, weaker tests are
short-circuited with a synthetic PASSED result:

```
test_count[c10000] PASSED  52.90s
test_count[c5000]  IMPLIED (higher count passed)
test_count[c1000]  IMPLIED (higher count passed)
test_count[c500]   IMPLIED (higher count passed)
test_count[c100]   IMPLIED (higher count passed)
```

If the strongest test fails, the plugin works downward until a level
passes; everything below that threshold is then implied, so no time is
wasted re-running tests the passing level already covers.

## Installation

```
pip install pytest-imply
```

## Markers

### `monotonic` — parametrized implication

For tests parametrized along an ordered axis:

```python
@pytest.mark.monotonic("count")
@pytest.mark.parametrize("count", [100, 500, 1000, 5000, 10000])
def test_stress(count):
    run_workload(count)
```

The highest value runs first.  If it passes, all lower values are
implied.  If it fails, the next-highest runs, and so on.

Use `direction="asc"` to run the smallest first:

```python
@pytest.mark.monotonic("threshold", direction="asc")
@pytest.mark.parametrize("threshold", [0.01, 0.1, 0.5, 1.0])
def test_precision(threshold):
    assert error < threshold
```
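The ordering rule is easy to state in plain Python.  The helper below is
purely illustrative (it is not part of the plugin's API): strongest value
first, where "strongest" is the largest value by default and the smallest
when `direction="asc"`.

```python
def monotonic_order(values, direction="desc"):
    """Order parameter values so the strongest candidate runs first:
    the largest value for the default "desc", the smallest for "asc"."""
    return sorted(values, reverse=(direction == "desc"))

print(monotonic_order([100, 500, 1000, 5000, 10000]))           # [10000, 5000, 1000, 500, 100]
print(monotonic_order([0.01, 0.1, 0.5, 1.0], direction="asc"))  # [0.01, 0.1, 0.5, 1.0]
```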

### `implies` / `implied_by` / `implied_by_any` / `implied_by_all` — named tokens

For arbitrary implication relationships between tests:

```python
@pytest.mark.implies("full_stack_ok")
def test_integration():
    """Full stack test — if this passes, unit tests are implied."""
    ...

@pytest.mark.implied_by("full_stack_ok")
def test_unit():
    """Implied by passing integration test — no need to run."""
    ...
```

Use `implied_by_any` for OR semantics (implied if **any** token was
recorded):

```python
@pytest.mark.implied_by_any("tcp_ok", "udp_ok")
def test_loopback():
    """Implied if either transport passed."""
    ...
```

Use `implied_by_all` for AND semantics (implied only if **all** tokens
were recorded):

```python
@pytest.mark.implied_by_all("tcp_ok", "udp_ok")
def test_both_transports():
    """Implied only if both transports passed."""
    ...
```
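The any/all semantics boil down to set membership over the tokens recorded
so far.  A minimal stand-in (function and variable names here are
illustrative, not the plugin's internals):

```python
recorded = set()  # tokens recorded by passing `implies` tests

def record(*tokens):
    recorded.update(tokens)

def is_implied_any(*tokens):
    """OR semantics: implied if any of the tokens has been recorded."""
    return any(t in recorded for t in tokens)

def is_implied_all(*tokens):
    """AND semantics: implied only if every token has been recorded."""
    return all(t in recorded for t in tokens)

record("tcp_ok")
print(is_implied_any("tcp_ok", "udp_ok"))  # True: one transport passed
print(is_implied_all("tcp_ok", "udp_ok"))  # False: udp_ok not yet recorded
```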

## Ordering

The plugin builds a dependency graph from all implication
relationships and topologically sorts the test items using Kahn's
algorithm.  This guarantees that implying tests always run before
their implied dependents.  Tests without implication markers keep
their original collection order.
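The sort can be sketched as follows.  This is a simplified stand-in that
assumes an acyclic graph; the plugin's actual tie-breaking rules may
differ, but the invariant is the same: an implying test is always emitted
before the tests it implies.

```python
from collections import deque

def imply_sort(items, implies_edges):
    """Kahn's algorithm: `implies_edges` maps each implying test to the
    tests it implies.  The initial queue is built in collection order, so
    tests with no incoming implication keep their relative positions."""
    indegree = {item: 0 for item in items}
    for src in implies_edges:
        for dst in implies_edges[src]:
            indegree[dst] += 1
    queue = deque(item for item in items if indegree[item] == 0)
    order = []
    while queue:
        item = queue.popleft()
        order.append(item)
        for dst in implies_edges.get(item, ()):
            indegree[dst] -= 1
            if indegree[dst] == 0:
                queue.append(dst)
    return order

# The integration test implies the unit test, so it is moved ahead of it.
print(imply_sort(["test_unit", "test_integration"],
                 {"test_integration": ["test_unit"]}))
```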

## Caveats

- **Transitive propagation**: when a test is implied (short-circuited),
  its `implies` tokens are still propagated, so downstream tests see
  them.  This means chains like A→B→C work as expected.

- **Stacked markers**: multiple markers of the same type on one test
  are all honoured.  For example, stacking two `@pytest.mark.implies`
  decorators records both sets of tokens.

- **Token namespacing**: tokens live in a single flat namespace.
  In large projects, use a prefix convention (e.g.,
  `"mymodule::token"`) to avoid accidental collisions between
  independently-authored test modules.

- **Fixtures**: when a test is implied, its fixtures **do not run**.
  If an implied test has a session- or module-scoped fixture whose
  side effects other tests depend on, those tests may break.  Only
  mark tests as implied when their fixtures are not needed by other
  tests.  The plugin warns about non-function-scoped fixtures on
  implied tests.  See [Suppressing fixture warnings](#suppressing-fixture-warnings)
  for ways to silence these when the fixtures are known to be safe.

- **pytest-xdist**: the plugin does not support parallel workers.
  Implication state is per-process.  When xdist is active with
  workers, implication logic is **automatically disabled** and a
  warning is emitted.  Use `-p no:imply` to suppress the warning.

- **Plugin interoperability**: when a test is implied, the plugin
  returns `True` from `pytest_runtest_protocol`, which tells pytest
  the item is fully handled.  Other plugins that wrap the test
  protocol (e.g., `pytest-cov`, `pytest-timeout`) will not see
  implied tests.  This is inherent to the short-circuit design.

- **Orphan tokens**: if an `implied_by` (or variant) references a
  token that no test provides via `implies`, a warning is emitted
  at collection time.

- **Dependency cycles**: if implication markers form a cycle, the
  plugin falls back to original collection order for the affected
  tests and emits a warning.

- **xfail interaction**: a test marked `@pytest.mark.xfail` with
  `strict=False` that passes (xpass) **does** record its `implies`
  tokens.  With `strict=True`, an xpass is treated as a failure and
  tokens are **not** recorded.
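The transitive-propagation caveat can be illustrated with a small
stand-in for the recording logic (hypothetical names, not the plugin's
internals): tokens are recorded both on a real pass and on a
short-circuit, which is what keeps A→B→C chains working.

```python
recorded = set()

def finish_test(passed, implies=(), implied_by=None):
    """Return the outcome for one test, propagating `implies` tokens even
    when the test itself is short-circuited."""
    if implied_by is not None and implied_by in recorded:
        recorded.update(implies)   # propagated without running the test
        return "IMPLIED"
    if passed:
        recorded.update(implies)
        return "PASSED"
    return "FAILED"

print(finish_test(True, implies=("a_ok",)))                     # A: PASSED
print(finish_test(True, implies=("b_ok",), implied_by="a_ok"))  # B: IMPLIED
print(finish_test(True, implied_by="b_ok"))                     # C: IMPLIED
```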

## Disabling implication

### `--no-imply`

Disable all implication logic and run every test:

```
pytest --no-imply
```

Use this for exhaustive nightly or release-gate runs to verify that
the developer's implication assumptions still hold.

### `imply_enabled` ini option

Disable implication via configuration instead of a CLI flag:

```toml
[tool.pytest.ini_options]
imply_enabled = false
```

## Suppressing fixture warnings

The plugin warns when an implied test uses a non-function-scoped
fixture, since that fixture's side effects will be skipped.  For
fixtures that are safe to skip (e.g. a session-scoped `autouse` fixture
that only creates a directory), you can suppress the warning in either
of two ways:

### `imply_ignore_fixtures` ini option (global)

List fixture names that should never trigger the warning:

```toml
[tool.pytest.ini_options]
imply_ignore_fixtures = ["_prepare_log_dir", "db_schema"]
```

### `@pytest.mark.imply_suppress` (per-test)

Suppress the warning for a specific test or test class:

```python
@pytest.mark.imply_suppress
@pytest.mark.monotonic("count")
@pytest.mark.parametrize("count", [100, 1000, 10000])
def test_stress(count, session_fixture):
    ...
```

Apply it at the class level to suppress for all tests in the class:

```python
@pytest.mark.imply_suppress
class TestSuite:
    @pytest.mark.monotonic("count")
    @pytest.mark.parametrize("count", [100, 1000])
    def test_a(self, count, session_fixture):
        ...
```

## How it works

1. **Collection** — `pytest_collection_modifyitems` builds the
   dependency graph and topologically sorts items.
2. **Execution** — `pytest_runtest_protocol` checks whether a test is
   implied; if so, it emits synthetic `TestReport` objects and
   returns `True`.
3. **Recording** — `pytest_runtest_makereport` records tokens and
   monotonic passes after a test succeeds.
4. **Reporting** — `pytest_report_teststatus` renders implied tests as
   `IMPLIED (reason)` with a cyan `i` marker.
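Step 2 can be sketched with plain-Python stand-ins for pytest's hook
machinery (the real plugin does this inside `pytest_runtest_protocol`
with genuine `TestReport` objects; the names below are illustrative):

```python
implied = {}   # item nodeid -> reason, derived from recorded state
reports = []   # collected (nodeid, phase, outcome) tuples

def runtest_protocol(nodeid):
    """Return True to claim the item; pytest then skips its own protocol."""
    reason = implied.get(nodeid)
    if reason is None:
        return None  # not implied: fall through to the normal test run
    for phase in ("setup", "call", "teardown"):
        reports.append((nodeid, phase, "passed"))  # synthetic PASSED reports
    return True

implied["test_count[c500]"] = "higher count passed"
print(runtest_protocol("test_count[c500]"))   # True: short-circuited
print(runtest_protocol("test_count[c100]"))   # None: runs normally
```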

## License

MIT
