# Auto Test
PlestyLib includes lightweight auto-test helpers to quickly validate a device API implementation.
These helpers are intended for development-time smoke testing, not full regression testing.
## Available Auto Tests

There are two utilities:

- **Config auto test**: validates registered config parameters (read-only, write-only, read-write).
- **Function auto test**: validates registered operations in the function system.
## 1) Config Auto Test (Synchronous Device API)

Utility location: `plestylib.test.auto_test_sync_device.auto_test`

What it does:

- Opens the device with a context manager.
- Prints the device identity.
- Queries all read-only configs.
- Writes all write-only configs using defaults or dtype-based fallback values.
- Writes and re-queries read-write configs.
Example:

```python
from plestylib.test.auto_test_sync_device import auto_test
from powermeter_device import PowermeterDevice

auto_test(
    PowermeterDevice,
    "USB0::0x1313::0x8078::P0000001::INSTR",
    sensor_type="S155C",
)
```
Notes:

- For read-write parameters without a default value, the current queried value is written back. This avoids guessing unsupported values when device constraints are incomplete.
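The value-selection policy described above can be sketched as follows. This is an illustrative sketch of the behavior, not the actual PlestyLib implementation; the function names are hypothetical:

```python
def fallback_value(dtype, default=None):
    """Pick a write value: registered default first, else a dtype-based fallback."""
    if default is not None:
        return default
    # Conservative per-dtype fallbacks when no default is registered.
    return {int: 0, float: 0.0, bool: False, str: ""}[dtype]


def rw_write_value(default, query_current):
    """For read-write params without a default, write back the queried value."""
    return default if default is not None else query_current()
```

Writing back the current value is the only choice guaranteed to be accepted by the device when min/max/choices constraints are not registered.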
## 2) Function Auto Test (Operation/Func System)

Utility location: `plestylib.test.auto_test_func_system.auto_test`

What it does:

- Opens the device and optionally queries its identity.
- Enumerates registered operations from `device._functions`.
- Generates valid payloads from `FuncParam` metadata.
- Calls each operation and checks the response shape.
- Prints a pass/fail summary.
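Seeded payload generation from parameter metadata can be sketched roughly as below. Parameters are modeled as plain dicts here for illustration; the real `FuncParam` metadata fields may differ:

```python
import random


def generate_payload(params, seed):
    """Build one deterministic payload dict from parameter metadata."""
    rng = random.Random(seed)
    payload = {}
    for p in params:
        if p.get("choices"):                      # enumerated values: pick one
            payload[p["name"]] = rng.choice(p["choices"])
        elif p["dtype"] is float:                 # bounded float range
            lo, hi = p.get("min", 0.0), p.get("max", 1.0)
            payload[p["name"]] = lo + (hi - lo) * rng.random()
        elif p["dtype"] is int:                   # bounded int range
            payload[p["name"]] = rng.randrange(p.get("max", 10) + 1)
        else:                                     # bool/str: type default
            payload[p["name"]] = p["dtype"]()
    return payload
```

Because the RNG is seeded, two runs with the same seed produce identical payloads, which is what makes failures reproducible.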
By default it can use a mock operation solver so operation validation can run without real backend behavior.
Example (mock solver mode):

```python
from plestylib.test.auto_test_func_system import auto_test
from my_device import MyDevice

auto_test(
    MyDevice,
    "device-id-or-address",
    use_mock_solver=True,
    seed=123,
)
```
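Conceptually, a metadata-driven mock solver fabricates a response matching each operation's declared output shape without touching a backend. The sketch below illustrates the idea with hypothetical names; it is not the PlestyLib internal implementation:

```python
import random


def make_mock_solver(seed):
    """Return a solve() that fabricates responses matching an output spec."""
    rng = random.Random(seed)
    fabricate = {
        int: lambda: rng.randrange(100),
        float: rng.random,
        bool: lambda: rng.random() < 0.5,
        str: lambda: "mock",
    }

    def solve(op_name, payload, output_spec):
        # output_spec maps each result field name to its expected Python type.
        return {field: fabricate[dtype]() for field, dtype in output_spec.items()}

    return solve
```

This is why mock mode can validate schemas (field names and types line up) while saying nothing about real device behavior.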
Example (real solver mode):

```python
from plestylib.test.auto_test_func_system import auto_test
from my_device import MyDevice

auto_test(
    MyDevice,
    "device-id-or-address",
    use_mock_solver=False,
    ignore_ops=["dangerous_operation"],
)
```
## Important Parameters (Function Auto Test)

- `use_mock_solver`: When true, binds an internal mock solver that generates output-shaped responses from metadata.
- `ignore_ops`: List of operation names to skip.
- `seed`: Seed for deterministic test payload generation.
- `sleep_time`: Delay between operation calls.
## Output Interpretation

The function auto test prints one line per operation:

- `[PASS] op_name payload=... response=...`
- `[FAIL] op_name payload=... error=...`

Then a summary:

- Passed count
- Failed count
- List of passed operations
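A run might print something like the following. The operation names, payloads, and exact summary formatting here are purely illustrative:

```text
[PASS] get_power payload={'channel': 1} response={'value': 0.42}
[FAIL] set_wavelength payload={'nm': 1550} error=TimeoutError('no response')
Passed: 1
Failed: 1
Passed operations: ['get_power']
```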
## Recommended Workflow

1. Start with the config auto test after registering parameters.
2. Add the function auto test after registering operations.
3. Run the function auto test first with `use_mock_solver=True` to validate schemas.
4. Then switch to `use_mock_solver=False` for end-to-end backend checks.
5. Add operation names to `ignore_ops` for stateful or destructive actions.
## Limitations

- These helpers are print-based and do not raise pytest-style assertions by default.
- They do not replace integration tests for timing, hardware state transitions, or long-running sequences.
- Mock mode validates interface contracts, not real protocol/device behavior.
## Using with pytest

You can use the auto-test utilities inside pytest suites in two ways.

### 1. Smoke-test style (no exception means pass)

```python
from plestylib.test.auto_test_sync_device import auto_test as auto_test_sync
from plestylib.test.auto_test_func_system import auto_test as auto_test_func
from my_device import MyDevice


def test_config_auto():
    auto_test_sync(MyDevice, "device-id-or-address")


def test_function_auto_mock():
    auto_test_func(MyDevice, "device-id-or-address", use_mock_solver=True, seed=123)
```
### 2. Assertion style with pytest output capture

Because these helpers currently print summary lines, use the pytest `capsys` fixture to capture stdout and assert on the summary text.

```python
from plestylib.test.auto_test_func_system import auto_test
from my_device import MyDevice


def test_no_failed_operations(capsys):
    auto_test(MyDevice, "device-id-or-address", use_mock_solver=True, seed=123)
    captured = capsys.readouterr()
    assert "Failed: 0" in captured.out, captured.out
```
### Recommendation

For stronger tests, consider extending the auto-test helpers to return a summary dictionary (for example, passed/failed counts) in addition to printing it. pytest assertions can then be made directly, without parsing text output.
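Until such an extension exists, a thin wrapper can capture stdout and parse the printed summary into a dict. This sketch assumes the `Passed: N` / `Failed: N` summary lines shown above; adjust the patterns if your PlestyLib version prints a different format:

```python
import io
import re
from contextlib import redirect_stdout


def run_with_summary(auto_test_fn, *args, **kwargs):
    """Run a print-based auto test and parse its summary into a dict."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        auto_test_fn(*args, **kwargs)
    out = buf.getvalue()

    def count(label):
        # Extract the integer after e.g. "Passed:" or "Failed:".
        match = re.search(rf"{label}:\s*(\d+)", out)
        return int(match.group(1)) if match else None

    return {"passed": count("Passed"), "failed": count("Failed"), "output": out}
```

A pytest test can then assert `run_with_summary(...)["failed"] == 0` directly, keeping the raw output available in the failure message.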