Apache-2.0 Python library
pcq turns ML experiments into agent-operable, reproducible units. Keep your existing training stack; pcq gives it a standard contract, artifact layout, validation surface, and final run record.
Why pcq
Framework-neutral
Use PyTorch, Hugging Face Trainer, sklearn, TabPFN, PyCaret, XGBoost, or custom Python. pcq standardizes the surrounding evidence instead of replacing the training loop.
JSON CLI surfaces, strictness gates, manifests, lineage, and run summaries are designed for coding agents, CI jobs, and services that need facts rather than prose.
Config, metrics, source identity, environment, inputs, artifacts, validation, and best/last results converge into one run_record.json.
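As a purely illustrative sketch of that convergence (every field name and value below is an assumption, not pcq's documented run_record.json schema), such a record might gather those facts along these lines:

```json
{
  "config":      {"learning_rate": 0.001, "epochs": 10},
  "source":      {"commit": "abc1234", "dirty": false},
  "environment": {"python": "3.12", "platform": "linux"},
  "inputs":      ["data/train.csv"],
  "artifacts":   ["model.pt", "metrics.json"],
  "validation":  {"passed": true},
  "best":        {"val_loss": 0.21},
  "last":        {"val_loss": 0.24}
}
```

The point is not the exact keys but that one machine-readable file answers "what ran, with what, and how did it end" without re-reading logs.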
Core contract
pcq keeps the model code free-form and makes the run boundary explicit. The project declares intent in cq.yaml, the script emits metrics, and pcq finalizes the standard artifacts.
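As an illustrative sketch of such a declaration (the keys and values here are guesses for a generic training run, not pcq's documented cq.yaml schema), the file might look like:

```yaml
# Hypothetical cq.yaml; keys and values are illustrative,
# not pcq's documented schema.
command: python train.py
config:
  learning_rate: 0.001
  epochs: 10
metrics:
  - name: val_loss
    goal: minimize
inputs:
  - data/train.csv
artifacts:
  - model.pt
```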
cq.yaml declares command, config, metrics, inputs, and artifacts.
pcq.log() emits structured metric history.
pcq.save_all() writes the standard artifact set.
pcq finalize turns a run directory into evidence.
Agent-operable by design
pcq gives agents stable machine-readable surfaces while leaving policy to the agent or service. The library reports facts: what ran, what changed, what passed, what failed, and which artifacts exist.
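Because the CLI reports facts as JSON, an agent or CI job can gate on them mechanically. A minimal sketch, using a hypothetical validation payload (the real `pcq validate-run ... --json` output schema may differ):

```python
import json

# Hypothetical payload for illustration; the real
# `pcq validate-run output --json` schema may differ.
sample = json.loads("""
{
  "passed": false,
  "checks": [
    {"name": "metrics_present", "status": "pass"},
    {"name": "artifacts_complete", "status": "fail"}
  ]
}
""")

def failed_checks(report):
    """Names of checks whose status is not 'pass'."""
    return [c["name"] for c in report.get("checks", []) if c["status"] != "pass"]

# Branch on the structured result instead of scraping log prose.
if not sample["passed"]:
    print("blocked by:", ", ".join(failed_checks(sample)))
```

The policy (retry, escalate, block a merge) stays with the agent or service; the library only supplies the facts.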
Quickstart
uv add pcq
pcq init-experiment --style script --output ./my-exp --with-pyproject
cd ./my-exp
uv sync
pcq run --json
pcq validate-run output --json
pcq describe-run output --json
Relationship with CQ
pcq: the open-source authoring, validation, artifact, and run-evidence library.
CQ: managed execution, queueing, artifact collection, dashboards, and agent loops.