Metadata-Version: 2.4
Name: crucihil
Version: 0.10.0
Summary: Bulletproof Hardware-in-the-Loop testing for firmware teams
Project-URL: Homepage, https://crucihil.io
Project-URL: Dashboard, https://app.crucihil.io
Project-URL: Source, https://github.com/crucihil/crucihil
Author-email: CruciHiL <hello@crucihil.io>
License: MIT
License-File: LICENSE
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: System :: Hardware
Requires-Python: >=3.10
Requires-Dist: alembic
Requires-Dist: bcrypt>=4.0
Requires-Dist: cantools>=39.0.0
Requires-Dist: fastapi>=0.110
Requires-Dist: fastmcp>=2.14.7
Requires-Dist: httpx>=0.27
Requires-Dist: psycopg2-binary>=2.9
Requires-Dist: pydantic-settings>=2.0
Requires-Dist: pydantic>=2.0
Requires-Dist: python-can>=4.3.0
Requires-Dist: python-jose[cryptography]
Requires-Dist: python-multipart
Requires-Dist: pyyaml>=6.0
Requires-Dist: resend>=2.0.0
Requires-Dist: sqlalchemy>=2.0
Requires-Dist: tomli>=2.0
Requires-Dist: typer>=0.9
Requires-Dist: uvicorn
Requires-Dist: websockets>=12.0
Provides-Extra: dev
Requires-Dist: mypy; extra == 'dev'
Requires-Dist: pytest; extra == 'dev'
Requires-Dist: pytest-asyncio; extra == 'dev'
Requires-Dist: ruff; extra == 'dev'
Description-Content-Type: text/markdown

# CruciHiL

**Bulletproof, easy-to-use Hardware-in-the-Loop (HiL) testing for firmware teams.**

Write a test in Python. Run it against simulation before hardware exists. Deploy to real hardware with zero test changes. See results in CI/CD automatically. Ask AI what broke and why.

---

## Why CruciHiL

Legacy HiL tools (dSPACE, NI, Vector) are expensive, slow to configure, and hostile to modern dev workflows. CruciHiL is built for teams that move fast:

- **Python-first** — no proprietary scripting languages, full IDE support
- **Simulation-to-hardware parity** — same test file, swap a TOML config
- **CI/CD native** — runs headless, produces JUnit XML, integrates with GitHub Actions
- **AI-powered analysis** — MCP server connects Claude/GPT directly to test results and signal traces

---

## Architecture

```
Layer 6 — Interfaces        Web Dashboard · CLI · CI/CD webhooks
Layer 5 — AI Interface      MCP Server (FastMCP) — 12 tools, vendor-agnostic
Layer 4 — Cloud Control     FastAPI + PostgreSQL — orchestration and history
Layer 3 — Local Agent       Test runner · YAML executor · result reporter
Layer 2 — Rig HAL           rig.can / rig.sim / rig.someip / rig.doip / rig.ecu
Layer 1 — Hardware          CAN · Ethernet · GPIO · Power · ECUs
```

Test code only ever touches Layer 2. Hardware details live in TOML config, never in test code.
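That boundary can be sketched as a structural type. This is an illustration only — the names below are assumptions, not the real `crucihil` interfaces:

```python
from typing import Any, Callable, Protocol, runtime_checkable

@runtime_checkable
class CanSurface(Protocol):
    """Sketch of the Layer 2 surface a test may touch (names assumed)."""
    async def send(self, message: str, fields: dict[str, float]) -> None: ...
    async def expect(self, signal: str, condition: Callable[[float], bool],
                     timeout: float) -> Any: ...

class VirtualCan:
    """A stand-in backend satisfies the same surface, so tests run unchanged."""
    async def send(self, message: str, fields: dict[str, float]) -> None:
        pass
    async def expect(self, signal, condition, timeout):
        return condition(800.0)

print(isinstance(VirtualCan(), CanSurface))  # structural match: True
```

Because tests only depend on this surface, a virtual backend and a hardware backend are interchangeable behind the same rig object.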

---

## Installation

```bash
pip install crucihil
```

Or from source:

```bash
git clone <repo>
cd crucihil
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
```

---

## Quick start (no hardware needed)

```bash
# 1. Generate a runnable example
crucihil scaffold --example hello_world

# 2. Run it immediately — virtual backend, no hardware required
cd examples/hello_world
crucihil run --suite suites/hello.yaml --rig rigs/virtual.toml -v
```

Three built-in examples demonstrate the full framework:

| Example | What it shows |
|---|---|
| `hello_world` | Minimal install check — two tests, no DBC |
| `can_signals` | BSE simulation + DBC + `rig.can.expect()` assertions |
| `fault_injection` | FaultDescriptor pattern, `sim.override()`, power cycle |

---

## CLI Reference

```
crucihil --help

Commands:
  version     Show CruciHiL version
  run         Run a test suite against a rig
  scaffold    Generate a test project, runnable example, or backend adapter stub
  init        Interactive wizard — create a rig TOML, optionally register with cloud
  discover    AI-assisted rig setup (probes hardware, generates TOML)
  agent       Start the persistent local agent daemon
  deregister  Remove a rig's saved API key from local credentials
```

---

### `crucihil run`

Run a YAML test suite against a rig TOML.

```bash
crucihil run --suite suites/engine.yaml --rig rigs/virtual.toml
crucihil run --suite suites/engine.yaml --rig rigs/my_bench.toml --verbose
crucihil run --suite suites/engine.yaml --rig rigs/my_bench.toml \
  --output results.xml --html results.html
```

**Options:**

```
--suite,   -s  PATH     Path to YAML suite manifest (required)
--rig,     -r  PATH     Path to rig TOML config (required)
--output,  -o  PATH     Write JUnit XML results here
--html         PATH     Write self-contained HTML report here
--tags         TAGS     Comma-separated tags — only run matching tests
--suite-type   TYPES    Comma-separated suite types — only run matching tests
--verbose, -v           Show per-test status and debug logs
```

**Exit codes:** `0` = all passed, `1` = one or more failed, `2` = framework error.
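A minimal sketch of honoring that contract in a wrapper script — the function name and status strings are assumptions for illustration, not crucihil API:

```python
def exit_code(statuses: list[str], framework_error: bool = False) -> int:
    """Map per-test statuses to the documented exit codes:
    0 = all passed, 1 = one or more failed, 2 = framework error."""
    if framework_error:
        return 2
    # 'blocked' does not count against the pass rate (see Writing Tests)
    return 1 if "fail" in statuses else 0

print(exit_code(["pass", "blocked"]))  # 0
print(exit_code(["pass", "fail"]))     # 1
```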

**Filtering examples:**

```bash
# Run only tests tagged 'smoke'
crucihil run --suite suites/regression.yaml --rig rigs/virtual.toml --tags smoke

# Run only regression suite-type tests
crucihil run --suite suites/all.yaml --rig rigs/my_bench.toml --suite-type regression

# Combine: smoke tests on a specific interface
crucihil run --suite suites/all.yaml --rig rigs/my_bench.toml --tags smoke,can
```

**Module resolution:** `crucihil run` adds the current directory and the suite file's parent directory to `sys.path`, so `module: tests.smoke` in your YAML resolves against your project root automatically. No `PYTHONPATH` setup needed.
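A rough sketch of that resolution, assuming only the documented behavior (this is not the actual implementation):

```python
import importlib
import sys
from pathlib import Path

def resolve_test(suite_path: str, module: str, function: str):
    """Prepend the cwd and the suite file's parent directory to
    sys.path, then import 'module' and fetch 'function' from it."""
    for root in (Path.cwd(), Path(suite_path).resolve().parent):
        if str(root) not in sys.path:
            sys.path.insert(0, str(root))
    return getattr(importlib.import_module(module), function)

# e.g. resolve_test("suites/hello.yaml", "tests.smoke", "test_boot")
```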

---

### `crucihil scaffold`

Three modes in one command.

#### Mode 1 — Test project from a rig TOML

```bash
crucihil scaffold --rig rigs/my_rig.toml
crucihil scaffold --rig rigs/my_rig.toml --output-dir /path/to/project
```

Reads the TOML, discovers what hardware is configured, and generates a runnable test project:

```
suites/smoke.yaml       — quick health checks, one per hardware section
suites/regression.yaml  — full coverage suite
tests/__init__.py
tests/smoke.py          — documented Python stubs with rig.can.expect() patterns
tests/regression.py     — regression stubs with fault injection patterns
```

Power and GPIO control is placed in YAML `setup:`/`teardown:` steps (declarative). Python functions contain only assertions.

#### Mode 2 — Runnable examples (no hardware required)

```bash
crucihil scaffold --example hello_world     # minimal install check
crucihil scaffold --example can_signals     # BSE simulation + signal assertions
crucihil scaffold --example fault_injection # FaultDescriptor + sim.override()
```

Each example includes a virtual rig TOML, YAML suite, Python test file, and (for can/fault examples) an example DBC. Run immediately with no setup:

```bash
cd examples/can_signals
crucihil run --suite suites/can_signals.yaml --rig rigs/virtual.toml -v
```

#### Mode 3 — Custom backend adapter stub

```bash
crucihil scaffold --adapter power --name RelayBoard
crucihil scaffold --adapter can   --name PeakUSB
crucihil scaffold --adapter gpio  --name FTDIBoard
crucihil scaffold --adapter doip  --name MyDoIPGW
crucihil scaffold --adapter someip --name VSomeIPProxy
crucihil scaffold --adapter udp   --name SensorStream
crucihil scaffold --adapter uds   --name CANIsotpClient
```

Generates a Python class stub implementing the HAL ABC for that backend type. All abstract methods are stubbed with docstrings describing what each must do. The command also prints the exact TOML snippet for wiring it in:

```bash
$ crucihil scaffold --adapter power --name RelayBoard

  wrote relay_board_backend.py

Reference it in your rig TOML:

  [rig.power.ecu_main]
  backend = "mypackage.relay_board_backend.RelayBoardBackend"
  default = "off"
```

**`--output-dir`** (all modes): directory to write files into. Defaults to `.`.

---

### `crucihil init`

Interactive wizard — creates a validated rig TOML and optionally registers the rig with the cloud.

```bash
crucihil init
crucihil init --output-dir rigs/
```

**What it does:**

1. Asks for rig name and platform
2. Asks: **virtual simulation** or **real hardware?**
   - **Virtual** — generates a complete working TOML instantly, no further prompts
   - **Hardware** — walks through each section:
     - CAN interfaces (auto-detected from `ip link`), bitrate presets (125k / 250k / 500k / 1M / 2M / 5M), FD mode, backend
     - Ethernet interfaces for DoIP and SOME/IP
     - Power rails — name each rail (e.g. `12v_supply`, `5v_logic`), pick backend, supports multiple
     - ECUs — name each ECU, set logical address, transport (DoIP or CAN ISO-TP), power rail reference, supports multiple
     - DBC file path
3. Shows the generated TOML for review
4. Asks for confirmation before writing
5. Optionally registers with the cloud (email + password login, saves API key to `~/.crucihil/credentials.toml`)

After `init`, run `crucihil scaffold --rig rigs/<name>.toml` to get a test project.

---

### `crucihil discover`

AI-assisted rig setup — probes the system and generates a TOML.

```bash
crucihil discover
crucihil discover --no-ai           # stub TOML from probe results, no API key needed
crucihil discover --provider openai
crucihil discover --provider gemini
crucihil discover --describe "Orin NX, two CAN buses on can0/can1, DoIP on eth0"
crucihil discover --model claude-haiku-4-5-20251001
```

**Options:**

```
--output-dir, -d  PATH   Directory to write generated TOML (default: rigs/)
--provider,   -p  NAME   AI provider: 'anthropic', 'openai', or 'gemini' (auto-detected from env)
--describe        TEXT   Hardware description passed to AI (skips interactive prompt)
--no-ai                  Skip AI, generate stub TOML from probe results only
--model           NAME   AI model override (default: claude-sonnet-4-6 / gpt-4o / gemini-2.0-flash)
```

API key lookup order: `ANTHROPIC_API_KEY` → `OPENAI_API_KEY` → `GOOGLE_API_KEY` → interactive prompt.

Generated TOML is validated against the `RigConfig` schema before write. Validation errors are shown as warnings — you can still write and edit manually.
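The real check runs the generated TOML through the `RigConfig` schema; here is a stdlib-only stand-in (checks and names invented for illustration) that shows the warn-don't-block behavior:

```python
def lint_rig_config(cfg: dict) -> list[str]:
    """Collect warnings instead of refusing to write, mirroring how
    'discover' surfaces validation errors (illustrative checks only)."""
    warnings = []
    rig = cfg.get("rig", {})
    if not rig.get("name"):
        warnings.append("rig.name is missing")
    for iface, can in rig.get("can", {}).items():
        if not isinstance(can.get("bitrate"), int):
            warnings.append(f"rig.can.{iface}: bitrate missing or not an integer")
    return warnings

cfg = {"rig": {"name": "bench", "can": {"can0": {"bitrate": "500k"}}}}
print(lint_rig_config(cfg))  # ["rig.can.can0: bitrate missing or not an integer"]
```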

---

### `crucihil agent`

Runs on the bench machine. Connects to the cloud control plane via WebSocket, receives test run commands, streams results back.

```bash
crucihil agent --rig rigs/my_bench.toml
crucihil agent --rig rigs/my_bench.toml --verbose
```

**Options:**

```
--rig,     -r  PATH   Path to rig TOML config (required)
--cache        PATH   SQLite result cache path (default: ~/.crucihil/results.db)
--verbose, -v         Enable debug logging
```

**First-boot auto-registration:** if `[rig.cloud]` contains a `registration_token` but no `api_key`, the agent registers itself on first boot, saves the key to `~/.crucihil/credentials.toml`, and connects. No manual steps needed.

```toml
[rig.cloud]
url                = "https://crucihil-server.fly.dev"
registration_token = "your-token-here"
# api_key is written automatically after first boot
```

Without `[rig.cloud]`, the agent runs in local-only mode (no cloud sync).

---

### `crucihil deregister`

Remove a rig's saved API key from the local credentials store. Run this after deleting a rig from the dashboard.

```bash
crucihil deregister my_rig
crucihil deregister my_rig --server https://crucihil-server.fly.dev
```

**Options:**

```
RIG_NAME    (positional) Rig name to remove from ~/.crucihil/credentials.toml
--server    URL          Server URL the rig was registered with (default: cloud server)
```

---

## Writing Tests

Test functions are plain async Python. The `rig` object is injected by the framework — never constructed in test code.

```python
from crucihil.hal.rig import Rig
from crucihil.hal.models.exceptions import BlockedError

async def test_engine_startup(rig: Rig, expected_rpm: float = 800.0) -> None:
    result = await rig.can.expect(
        signal="EngineData.RPM",
        condition=lambda v: v > expected_rpm,
        timeout=2.0,
    )
    assert result.passed, result.fail_msg
```

Switch from virtual to real hardware: change `--rig rigs/virtual.toml` to `--rig rigs/my_bench.toml`. The test is unchanged.

**Status rules:**
- `assert` fails → `status = "fail"` — firmware bug, counts against pass rate
- `raise BlockedError("msg")` → `status = "blocked"` — precondition failed, does NOT count against pass rate
- Clean return → `status = "pass"`

### Rig HAL API

```python
# CAN
await rig.can.send(message="EngineControl", fields={"Throttle": 50.0})
result = await rig.can.expect(signal="EngineData.RPM", condition=lambda v: v > 800, timeout=2.0)

# Simulation (virtual backend)
await rig.sim.set("EngineData.RPM", 2500.0)
rig.sim.start("EngineData")              # start BSE cyclic transmission
rig.sim.stop("EngineData")
async with rig.sim.override("EngineData.RPM", 5500.0):
    ...                                  # value restored on exit, even on exception

# Fault injection — FaultDescriptor pattern (NOT a coroutine)
async with rig.fault.inject(rig.fault.can_dropout(arb_id=0x100, duration=1.0)):
    await asyncio.sleep(1.0)
async with rig.fault.inject(rig.fault.power_cycle(rail="ecu_main", off_duration=0.5)):
    await asyncio.sleep(0.5)

# ECU diagnostics (DoIP)
response = await rig.ecu["ecu_main"].uds.ecu_reset(reset_type=0x01)
assert response.positive, f"ECU reset failed: {response}"
```

### Suite YAML format

Tests are declared in YAML — hardware setup, metadata, and filtering. Python functions contain only assertions.

```yaml
suite:
  name: engine_validation
  version: "1.0.0"

defaults:
  timeout: 30.0
  suite_types: [regression]

tests:
  - id: engine_startup
    name: Engine startup
    tags: [smoke, engine]
    priority: critical           # critical / high / medium / low
    depends_on: []               # skip if any listed test failed
    suite_types: [smoke, regression]
    setup:
      - power.on: ecu_main
      - sim.set:  { signal: "EngineData.RPM", value: 0.0 }
      - sim.start: EngineData
    teardown:
      - sim.stop: EngineData
      - power.off: ecu_main
    module: tests.engine         # dotted module path from project root
    function: test_engine_startup
    params:
      expected_rpm: 800.0        # forwarded as kwargs to the function
```

**YAML setup/teardown actions:**

```yaml
- sim.set:    { signal: "Msg.Sig", value: 0.0 }
- sim.start:  MessageName
- sim.stop:   MessageName
- sim.start_all:               # start all configured messages
- power.on:   rail_name
- power.off:  rail_name
- gpio.set:   { pin: ignition_enable, value: true }
```
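One way such steps could be dispatched: each key splits into a rig namespace and a method. This is an illustrative executor against a fake rig, not the framework's implementation:

```python
import asyncio

class Recorder:
    """Fake namespace that records calls instead of touching hardware."""
    def __init__(self, log):
        self.log = log
    def __getattr__(self, method):
        async def call(*args, **kwargs):
            self.log.append(method)
        return call

class FakeRig:
    def __init__(self):
        self.log = []
        self.sim = Recorder(self.log)
        self.power = Recorder(self.log)

async def run_steps(rig, steps):
    """Map each 'namespace.method' step onto a rig call (sketch)."""
    for step in steps:
        (name, arg), = step.items()
        ns, method = name.split(".")
        fn = getattr(getattr(rig, ns), method)
        await (fn(**arg) if isinstance(arg, dict) else fn(arg))

rig = FakeRig()
steps = [{"power.on": "ecu_main"},
         {"sim.set": {"signal": "EngineData.RPM", "value": 0.0}},
         {"sim.start": "EngineData"}]
asyncio.run(run_steps(rig, steps))
print(rig.log)  # ['on', 'set', 'start']
```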

---

## Rig TOML format

Hardware details go in TOML, never in test code.

```toml
[rig]
name         = "my_bench"
platform     = "orin_nx"
spec_version = "1.0"
backend      = "hardware"       # or "virtual" for simulation

[rig.can.can0]
interface = "can0"
bitrate   = 500000
fd        = false
backend   = "socketcan"         # socketcan / peak / virtual / <module.path.ClassName>

[rig.ethernet.eth0]
interface      = "eth0"
ip             = "169.254.0.1"
someip_backend = "python-someip"
doip_backend   = "python-doip"

[rig.power.ecu_main]
backend = "gpio_relay"          # virtual_power / gpio_relay / bench_psu / <module.path.ClassName>
default = "off"
gpio_pin = 17                   # for gpio_relay backend

[rig.gpio]
ignition_enable = { pin = 22, direction = "out", default = false, backend = "linux_gpio" }

[rig.ecus.ecu_main]
name            = "Main ECU"
logical_address = 0x0001
transport       = "doip"        # doip or can_isotp
doip_interface  = "eth0"
power_rail      = "ecu_main"    # optional — links power control to this ECU
boot_timeout    = 10.0          # seconds — hardware-specific, never in test code

[rig.definitions]
can_dbc = "defs/vehicle_can.dbc"

[rig.cloud]
url = "https://crucihil-server.fly.dev"
# api_key is stored in ~/.crucihil/credentials.toml after first boot
```

---

## Custom Hardware Backends

CruciHiL ships virtual and common reference backends. For custom hardware, implement the relevant ABC:

```bash
# Generate a stub for any backend type
crucihil scaffold --adapter power --name RelayBoard   # → relay_board_backend.py
crucihil scaffold --adapter can   --name MyUSBAdaptor
crucihil scaffold --adapter gpio  --name FTDIBoard
```

```python
# relay_board_backend.py
from crucihil.hal.backends.base import AbstractPowerBackend

class RelayBoardBackend(AbstractPowerBackend):
    async def connect(self) -> None: ...
    async def disconnect(self) -> None: ...
    async def on(self) -> None: ...
    async def off(self) -> None: ...
    async def read_voltage(self) -> float: return 12.0
    async def set_voltage(self, voltage: float) -> None: ...
```

Reference it in your TOML:

```toml
[rig.power.ecu_main]
backend = "mypackage.relay_board_backend.RelayBoardBackend"
```

The class is loaded via `importlib` at rig connect time. Any package on `sys.path` works.
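The loading step amounts to a dotted-path split plus `importlib.import_module`; a sketch, demonstrated with a stdlib class since any importable class works the same way:

```python
import importlib

def load_backend(dotted_path: str):
    """Resolve 'package.module.ClassName' to the class object, the way
    a dotted backend string is resolved at rig connect time (sketch)."""
    module_path, _, class_name = dotted_path.rpartition(".")
    return getattr(importlib.import_module(module_path), class_name)

# demonstrated with a stdlib class; a real rig TOML would reference
# something like "mypackage.relay_board_backend.RelayBoardBackend"
print(load_backend("collections.OrderedDict"))
```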

**Adapter types:** `can` · `power` · `gpio` · `doip` · `someip` · `udp` · `uds`

---

## Supported Hardware Backends

| Bus | Backends |
|---|---|
| CAN | `socketcan` (Linux), `peak` (PEAK PCAN-USB), `virtual` |
| SOME/IP | `vsomeip`, `python-someip` (virtual) |
| DoIP | `python-doip`, `virtual` |
| Power | `gpio_relay`, `bench_psu`, `virtual_power` |
| GPIO | `linux_gpio`, `virtual_gpio` |
| Custom | `<dotted.module.path.ClassName>` via importlib |

---

## Complete Rig Setup Workflow

### New project with virtual simulation

```bash
# 1. Install
pip install crucihil

# 2. Create a virtual rig
crucihil init            # choose mode [1] Virtual

# 3. Generate test project
crucihil scaffold --rig rigs/my_rig.toml

# 4. Run
crucihil run --suite suites/smoke.yaml --rig rigs/my_rig.toml -v
```

### Bringing a real bench machine online

```bash
# On the bench machine
pip install crucihil

# Run the setup wizard — mode [2] Hardware
crucihil init

# At the end, answer y to cloud registration
# Provide your app.crucihil.io email + password when prompted

# Start the agent
crucihil agent --rig rigs/<name>.toml
```

The rig appears as **connected** in the dashboard within seconds.

### Production deployment (systemd)

```bash
sudo ./scripts/install-agent.sh \
  --rig rigs/my_bench.toml \
  --server https://crucihil-server.fly.dev \
  --key <api-key>

systemctl status crucihil-agent@my-bench
journalctl -u crucihil-agent@my-bench -f
```

---

## Cloud Dashboard

Available at `https://app.crucihil.io`.

### First-time setup (self-hosted)

```bash
curl -X POST https://your-server/api/v1/setup \
  -H 'Content-Type: application/json' \
  -d '{"org_name":"Acme","admin_email":"you@company.com","admin_password":"strong-password"}'
```

Returns a JWT. Log in at the dashboard with the same email and password.

### Inviting team members

Admins: **Settings → Team → Invite member**. Members can view results and trigger runs; admins can also manage rigs and users.

### Connecting an AI client (MCP)

Add to Claude Desktop `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "crucihil": {
      "url": "https://crucihil-mcp.fly.dev/sse",
      "headers": {
        "Authorization": "Bearer <your-api-key>"
      }
    }
  }
}
```

| MCP Tool | What it does |
|---|---|
| `list_rigs` | List rigs with online/offline status |
| `get_rig_config` | Hardware summary for one rig |
| `list_runs` | Query run history |
| `get_run_summary` | Pass/fail counts and status for one run |
| `run_test_suite` | Trigger a test suite on a connected rig |
| `cancel_run` | Cancel an active run |
| `get_results` | Per-test results (filterable by status) |
| `get_signal_trace` | Signal telemetry recorded during a run |
| `describe_failure` | Full failure context in one call — errors + signals + logs |
| `list_signals` | Parse a DBC and return all signal names |
| `list_tests` | Parse a YAML manifest and return test metadata |
| `generate_test_suite` | Scaffold YAML + Python stubs; pass `context_items` for AI-generated assertions |

---

## Self-Hosting with Docker

```bash
cp .env.example .env       # set POSTGRES_PASSWORD and SECRET_KEY
./setup.sh                 # bootstrap containers + first-run migration
./dev.sh                   # start dev server (hot reload) at localhost:5173
```

```bash
./setup.sh --status        # service health
./setup.sh --restart       # restart containers
./dev.sh --rig rigs/my_bench.toml   # register + start a native agent
```

### Deploy to Fly.io + Vercel

```bash
fly deploy --config fly.server.toml   # control plane
fly deploy --config fly.mcp.toml      # MCP server
# Dashboard auto-deploys to Vercel on push to main
```

Set secrets (never in TOML files):

```bash
fly secrets set SECRET_KEY="..." REGISTRATION_TOKEN="..." RESEND_API_KEY="re_..." \
  --app crucihil-server
```

---

## Project Structure

```
crucihil/
├── hal/          Layer 2: Rig HAL (backends, BSE, namespaces, config)
├── agent/        Layer 3: Test runner, agent daemon, SQLite cache, init wizard
├── server/       Layer 4: FastAPI control plane + PostgreSQL
├── mcp/          Layer 5: MCP server (12 tools, FastMCP 3.x)
└── cli/          Layer 6: CLI entry point

rigs/             Rig TOML configs (hardware details — never in test code)
tests/
├── unit/         Unit tests
└── integration/  Integration tests against virtual rig
scripts/
├── release.sh         Cut a release: ./scripts/release.sh 0.5.0
└── install-agent.sh   Install agent as systemd service on a bench machine
```

---

## Cutting a Release

```bash
./scripts/release.sh 0.5.0
```

Bumps `pyproject.toml`, commits, tags `v0.5.0`, pushes. GitHub Actions publishes to PyPI and creates a GitHub Release automatically.

---

## Requirements

- Python 3.10+
- Linux required for real hardware (SocketCAN, GPIO)
- vsomeip library for SOME/IP hardware backend
- python-doip for DoIP hardware backend
- PEAK drivers for PEAK PCAN adapters
