Metadata-Version: 2.4
Name: qlro
Version: 0.13.0
Summary: Describe your quantum workload. We recommend where to run it.
Author-email: Yeonwoo Oh <official@stockfolio.ai>
Maintainer-email: Yeonwoo Oh <official@stockfolio.ai>
License: Apache-2.0
Project-URL: Homepage, https://github.com/linsletoh/qlro
Project-URL: Documentation, https://github.com/linsletoh/qlro/blob/main/WCPP_SPEC.md
Project-URL: Paper, https://doi.org/10.5281/zenodo.19785800
Project-URL: Simulator, https://qlro-three.vercel.app/simulator
Project-URL: Issues, https://github.com/linsletoh/qlro/issues
Keywords: quantum,benchmarking,recommendation,qiskit,metriq,wcpp
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: Topic :: Scientific/Engineering :: Physics
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.11
Description-Content-Type: text/markdown
Requires-Dist: pydantic>=2.5.0
Requires-Dist: qiskit>=1.0.0
Requires-Dist: qiskit-aer>=0.13.0
Requires-Dist: fpdf2>=2.8.0
Provides-Extra: server
Requires-Dist: fastapi>=0.104.0; extra == "server"
Requires-Dist: uvicorn>=0.24.0; extra == "server"
Requires-Dist: sqlalchemy>=2.0.0; extra == "server"
Provides-Extra: dev
Requires-Dist: pytest>=7.4.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.23.0; extra == "dev"
Requires-Dist: httpx>=0.25.0; extra == "dev"
Provides-Extra: braket
Requires-Dist: amazon-braket-sdk>=1.80.0; extra == "braket"
Provides-Extra: qiskit
Provides-Extra: reporting

# Qlro

**Show us your quantum circuit. We'll tell you where to run it.**

```python
from qiskit import QuantumCircuit
import qlro

qc = QuantumCircuit(4)
qc.h(0); qc.cx(0, 1); qc.cx(1, 2); qc.cx(2, 3)
qc.measure_all()

result = qlro.recommend(qc, category="chemistry")
result.primary      # → 'H2-2'
result.primary_fit  # → 0.8140
```

Qlro is a quantum device recommendation engine. Give it a circuit and a workload type, and it ranks every available quantum device by how well that device fits *your specific workload* — based on real benchmark data from [Metriq](https://metriq.info) (Unitary Foundation), not vendor marketing.

## Install

```bash
pip install qlro
```

Ships with a snapshot of the Metriq benchmark dataset. No API keys, no accounts, no internet required.

## How it works

1. **You have a quantum circuit** (Qiskit `QuantumCircuit` or OpenQASM string).
2. **You tell Qlro what kind of workload it is**: `chemistry`, `simulation`, `optimization`, or `ml`.
3. **Qlro scores every quantum device** across four capability axes:
   - **Γ (Connectivity)** — verified entanglement coverage across the chip
   - **Φ (Coherence)** — information survival over circuit depth
   - **F (Fidelity)** — per-operation accuracy
   - **T (Throughput)** — effective operations per second
4. **You get a ranked list** with scores, uncertainty bands, and the Metriq commit hash so everything is auditable.

The scoring framework is called [WCPP (Workload-Conditioned Physical Projection)](WCPP_SPEC.md). Every number comes from physics, not heuristics. See the [full specification](WCPP_SPEC.md) for the math, axioms, and proofs.
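The composition step above can be sketched as a weighted geometric mean of the four axis scores. This is a minimal illustration of the idea, not the reference implementation: the axis values and workload weights below are made up, not the calibrated WCPP numbers.

```python
import math

def workload_fit(axes: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted geometric mean of capability-axis scores (illustrative)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return math.exp(sum(w * math.log(axes[k]) for k, w in weights.items()))

# Hypothetical axis scores for one device, with illustrative chemistry weights
axes = {"gamma": 0.90, "phi": 0.75, "F": 0.85, "T": 0.60}
weights = {"gamma": 0.2, "phi": 0.3, "F": 0.4, "T": 0.1}
fit = workload_fit(axes, weights)  # single fit score in (0, 1]
```

A geometric mean (rather than a weighted sum) means one near-zero axis drags the whole score down, which matches the intuition that a device failing on any capability the workload needs is a poor fit overall.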

## What Qlro does NOT do

- **Does not run your circuit.** You run it yourself on IBM Quantum, AWS Braket, Quantinuum, etc.
- **Does not build or optimize circuits.** That's Classiq, Qiskit transpiler, etc.
- **Does not measure hardware.** That's [Metriq](https://metriq.info) / Unitary Foundation. We consume their data.
- **Does not hide uncertainty.** Every score shows what's measured vs. estimated vs. assumed.

## Auto-logging (0.6.0+)

Every quantum-circuit execution can flow into [qlro.io/accuracy](https://qlro.io/accuracy) automatically — no manual `log_outcome()` calls needed.

**AWS Braket** — monkey-patch the SDK once, then every `run()` + `result()` pair is instrumented:

```python
import qlro.autolog.braket as qlbraket
qlbraket.enable()

# existing Braket code continues to work unchanged
from braket.aws import AwsDevice
device = AwsDevice("arn:aws:braket:eu-north-1::device/qpu/iqm/Garnet")
task = device.run(circuit, shots=1000)
result = task.result()          # ← auto-posts (prediction, observation)
```
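Under the hood, `enable()` follows the standard monkey-patching pattern: save the original method, install a wrapper that calls it and records the result, and guard against double-patching. A simplified sketch with a stand-in class (`FakeTask`, `log`, and the guard flag are illustrative; the real hook targets the Braket SDK's task classes):

```python
class FakeTask:  # stand-in for a Braket quantum task
    def result(self):
        return {"counts": {"00": 512, "11": 488}}

def enable(cls):
    """Install a logging wrapper around cls.result(), at most once."""
    if getattr(cls, "_qlro_patched", False):
        return  # idempotent: calling enable() twice must not double-log
    original = cls.result

    def patched(self):
        outcome = original(self)
        # best-effort: a real hook would post the (prediction, observation)
        # pair here, swallowing any network error
        log.append(outcome)
        return outcome

    cls.result = patched
    cls._qlro_patched = True

log: list = []
enable(FakeTask)
res = FakeTask().result()  # behaves exactly as before, but is also logged
```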

**Qiskit** — wrap a backend; the proxy forwards every attribute but instruments `.run()`:

```python
import qlro.autolog.qiskit as qlqiskit
backend = service.backend("ibm_fez")
backend = qlqiskit.wrap(backend)

job = backend.run(circuit, shots=1024)
result = job.result()           # ← auto-posts (prediction, observation)
```
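The `wrap()` proxy described above is a thin attribute-forwarding wrapper: `__getattr__` delegates everything to the real backend, and only `run()` is intercepted. A simplified sketch under that assumption (`DummyBackend` and `calls` are stand-ins; the real wrapper posts the prediction/observation pair instead of appending to a list):

```python
class BackendProxy:
    """Forward every attribute to the wrapped backend; instrument run()."""
    def __init__(self, backend):
        self._backend = backend

    def __getattr__(self, name):
        # only reached for attributes not defined on the proxy itself
        return getattr(self._backend, name)

    def run(self, *args, **kwargs):
        job = self._backend.run(*args, **kwargs)
        calls.append((args, kwargs))  # record for later logging
        return job

class DummyBackend:  # stand-in for a real Qiskit backend
    name = "dummy"
    def run(self, circuit, shots=1024):
        return f"job({circuit}, shots={shots})"

calls: list = []
backend = BackendProxy(DummyBackend())
job = backend.run("qc", shots=256)
```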

By default the observed fidelity is the dominant-bitstring proxy (**Silver tier**), which works for any circuit. Pass a custom metric for rigorous **Gold-tier** submissions:

```python
def ghz_metric(counts):
    total = sum(counts.values())
    return (counts.get("0000", 0) + counts.get("1111", 0)) / total

backend = qlqiskit.wrap(backend, metric=ghz_metric)   # ← Gold-tier
```
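The Silver-tier default can be sketched as the fraction of shots landing on the single most frequent bitstring; this is one plausible reading of "dominant-bitstring proxy", not the exact implementation:

```python
def dominant_bitstring_fidelity(counts: dict[str, int]) -> float:
    """Share of shots on the most frequent outcome (Silver-tier proxy sketch)."""
    total = sum(counts.values())
    return max(counts.values()) / total

counts = {"0000": 812, "1111": 150, "0101": 38}  # hypothetical shot counts
dominant_bitstring_fidelity(counts)  # → 0.812
```

Note how the GHZ metric shown above differs: it credits both `0000` and `1111`, which is why a circuit-specific Gold-tier metric can be more faithful than the generic proxy.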

Predictions and outcomes persist in a local SQLite cache at `~/.qlro/autolog.db`, so a Braket task submitted today and fetched tomorrow still gets auto-posted. Auto-logging is strictly best-effort — any network or parsing error is swallowed and logged, never raised into the user's workflow.
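A conceptual sketch of that cache, assuming nothing about the real schema: the point is simply that the prediction is keyed by task id, so the observation can join it whenever the result is eventually fetched (the table name and columns below are invented for illustration):

```python
import sqlite3

con = sqlite3.connect(":memory:")  # the real cache lives at ~/.qlro/autolog.db
con.execute(
    "CREATE TABLE autolog (task_id TEXT PRIMARY KEY,"
    " prediction REAL, observation REAL)"
)

# Day 1: task submitted, prediction cached under its task id
con.execute("INSERT INTO autolog VALUES (?, ?, NULL)", ("task-123", 0.91))

# Day 2: result fetched, observation joined to the cached prediction
con.execute(
    "UPDATE autolog SET observation = ? WHERE task_id = ?", (0.87, "task-123")
)
row = con.execute("SELECT prediction, observation FROM autolog").fetchone()
# row is now the complete (prediction, observation) pair, ready to post
```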

Install the Braket extra:

```bash
pip install "qlro[braket]"
```

Qiskit is already a base dependency.

## Command line

Four shell commands cover the common workflow without leaving the terminal:

```bash
# Daily — rank a circuit file
qlro recommend my_vqe.qasm --category chemistry
qlro recommend my_vqe.qasm --category optimization --all
cat my_vqe.qasm | qlro recommend - --json | jq '.rankings[0]'

# Pre-flight — check device snapshot freshness and drift
qlro doctor iqm_garnet
# Exit codes: 0 = healthy, 1 = stale (>7 days), 2 = drifted (>=2x from snapshot)

# Calibrate — recover 82-94% of cross-vendor RMSE (paper §8.10.9)
qlro calibrate iqm_garnet \
  --ghz-fidelity 0.953 \
  --deep-ladder-fidelity 0.39
# Saves to ~/.qlro/calibrations/iqm_garnet.json

# Retrospective — Braket savings audit
pip install "qlro[braket]"
qlro braket-retro --days 30
```

`qlro --help` lists all subcommands; `qlro <cmd> --help` shows flags.

## Jupyter integration

Qlro auto-renders as an inline HTML table in Jupyter notebooks. For quick interactive use:

```python
%load_ext qlro.jupyter
%qlro my_circuit chemistry
```

See [`examples/vqe_h2_with_qlro.ipynb`](examples/vqe_h2_with_qlro.ipynb) for a complete walkthrough.

## Try the interactive simulator

Not sure what problem Qlro solves? [Play the simulator](https://qlro-three.vercel.app/simulator) — a 5-minute browser game where you play as a quantum engineer under a paper deadline, with and without Qlro.

## Update benchmark data

```bash
python scripts/sync_metriq.py
```

Pulls the latest from the [metriq-data](https://github.com/unitaryfoundation/metriq-data) repository. Every recommendation is anchored to a specific Metriq commit for reproducibility.

## Development

```bash
git clone https://github.com/linsletoh/qlro.git
cd qlro
pip install -e ".[dev,server]"
pytest  # 107 tests
```

## Architecture

```
src/qlro/
├── scoring/          ← WCPP reference implementation
│   ├── physics.py    ← benchmark → physical value transforms
│   ├── axes.py       ← capability axis aggregation
│   ├── composition.py← workload-conditioned geometric mean
│   └── wcpp.py       ← qlro_fit() entry point
├── public_api.py     ← recommend(), log_outcome()
├── jupyter.py        ← %qlro magic for notebooks
├── runner/           ← Qiskit Aer experiment runner
├── comparison/       ← normalization pipeline
├── recommendation/   ← scoring + explanation engine
└── api.py            ← FastAPI web application
```

## Key documents

- [**WCPP_SPEC.md**](WCPP_SPEC.md) — Full technical specification of the scoring framework
- [**ROADMAP.md**](ROADMAP.md) — Product roadmap + positioning copy
- [**papers/wcpp-draft.md**](papers/wcpp-draft.md) — WCPP preprint v1.1 (published on Zenodo, [DOI: 10.5281/zenodo.19785800](https://doi.org/10.5281/zenodo.19785800))
- [**test_results/**](test_results/) — Validation records with external vendor cross-references

## Citation

If you use Qlro or WCPP in research, please cite the Zenodo preprint:

```bibtex
@misc{oh2026wcpp,
  author    = {Oh, Yeonwoo},
  title     = {{Workload-Conditioned Physical Projection: A Vendor-Neutral Framework for Quantum Device Selection}},
  year      = {2026},
  publisher = {Zenodo},
  version   = {1.1},
  doi       = {10.5281/zenodo.19785800},
  url       = {https://doi.org/10.5281/zenodo.19785800}
}
```

Latest version DOI: [**10.5281/zenodo.19785800**](https://doi.org/10.5281/zenodo.19785800) (v1.1). This release strengthens v1.0 after reviewer round 2: a Table 5 weight cross-reference was added in §8.8b; the §8.10.7 "opposing directions" result was reframed as a structural sensitivity-profile claim; and the Γ-Φ collinearity is now directly acknowledged, with a permutation p-value and a commitment to orthogonalization in v1.2. The main empirical results are unchanged: $N = 100$ cross-vendor CSE validation, stable-snapshot subset $r = 0.964$, full aggregate $r = 0.893$, plus preliminary explorations in Appendix D.

Concept DOI (always resolves to the latest version): [**10.5281/zenodo.19601378**](https://doi.org/10.5281/zenodo.19601378).

Previous versions: v1.0 at [10.5281/zenodo.19726906](https://doi.org/10.5281/zenodo.19726906), v0.9 at [10.5281/zenodo.19707205](https://doi.org/10.5281/zenodo.19707205), v0.8 at [10.5281/zenodo.19678508](https://doi.org/10.5281/zenodo.19678508), v0.7 at [10.5281/zenodo.19650211](https://doi.org/10.5281/zenodo.19650211), v0.6 at [10.5281/zenodo.19622226](https://doi.org/10.5281/zenodo.19622226) (archived).

## License

Apache 2.0
