Metadata-Version: 2.4
Name: crucihil
Version: 0.5.0
Summary: Bulletproof Hardware-in-the-Loop testing for firmware teams
Project-URL: Homepage, https://crucihil.io
Project-URL: Dashboard, https://app.crucihil.io
Project-URL: Source, https://github.com/crucihil/crucihil
Author-email: CruciHiL <hello@crucihil.io>
License: MIT
License-File: LICENSE
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Testing
Classifier: Topic :: System :: Hardware
Requires-Python: >=3.11
Requires-Dist: alembic
Requires-Dist: bcrypt>=4.0
Requires-Dist: cantools>=39.0.0
Requires-Dist: fastapi>=0.110
Requires-Dist: fastmcp>=2.14.7
Requires-Dist: httpx>=0.27
Requires-Dist: psycopg2-binary>=2.9
Requires-Dist: pydantic-settings>=2.0
Requires-Dist: pydantic>=2.0
Requires-Dist: python-can>=4.3.0
Requires-Dist: python-jose[cryptography]
Requires-Dist: python-multipart
Requires-Dist: pyyaml>=6.0
Requires-Dist: resend>=2.0.0
Requires-Dist: sqlalchemy>=2.0
Requires-Dist: tomli>=2.0
Requires-Dist: typer>=0.9
Requires-Dist: uvicorn
Requires-Dist: websockets>=12.0
Provides-Extra: dev
Requires-Dist: mypy; extra == 'dev'
Requires-Dist: pytest; extra == 'dev'
Requires-Dist: pytest-asyncio; extra == 'dev'
Requires-Dist: ruff; extra == 'dev'
Description-Content-Type: text/markdown

# CruciHiL

**Bulletproof, easy-to-use Hardware-in-the-Loop (HiL) testing for firmware teams.**

Write a test in Python. Run it against simulation before hardware exists. Deploy to real hardware with zero test changes. See results in CI/CD automatically. Ask AI what broke and why.

---

## Why CruciHiL

Legacy HiL tools (dSPACE, NI, Vector) are expensive, slow to configure, and hostile to modern dev workflows. CruciHiL is built for teams that move fast:

- **Python-first** — no proprietary scripting languages, full IDE support
- **Simulation-to-hardware parity** — same test file, swap a TOML config
- **CI/CD native** — runs headless, produces JUnit XML, integrates with GitHub Actions
- **AI-powered analysis** — MCP server connects Claude/GPT directly to test results and signal traces

---

## Architecture

```
Layer 6 — Interfaces        Web Dashboard · CLI · CI/CD webhooks
Layer 5 — AI Interface      MCP Server (FastMCP) — 11 tools, vendor-agnostic
Layer 4 — Cloud Control     FastAPI + PostgreSQL — orchestration and history
Layer 3 — Local Agent       Test runner · YAML executor · result reporter
Layer 2 — Rig HAL           rig.can / rig.sim / rig.someip / rig.doip / rig.ecu
Layer 1 — Hardware          CAN · Ethernet · GPIO · Power · ECUs
```

Test code only ever touches Layer 2. Hardware details live in TOML config, never in test code.
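
For illustration, a bench rig config might look roughly like this — a hypothetical sketch; apart from the `[rig.cloud]` keys shown later in this README, the section and key names are assumptions based on what the init wizard prompts for:

```toml
# rigs/my_bench.toml — hypothetical sketch; key names are assumptions
[rig.can]
backend = "socketcan"    # or "peak" / "virtual"
channel = "can0"
bitrate = 500000
fd      = false
dbc     = "dbc/vehicle.dbc"

[rig.power]
backend = "gpio_relay"   # or "bench_psu" / "virtual_power"

[rig.cloud]
url = "https://crucihil-server.fly.dev"
```

Because tests only address `rig.can`, `rig.power`, and the other Layer 2 namespaces, swapping this file for `rigs/virtual.toml` changes the backends without touching the tests.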

---

## Installation

```bash
pip install crucihil
```

Or from source:

```bash
git clone <repo>
cd crucihil
python -m venv .venv && source .venv/bin/activate
pip install -e ".[dev]"
```

---

## CLI Reference

```
crucihil --help

Commands:
  version   Show CruciHiL version
  run       Run a test suite against a rig
  agent     Start the persistent local agent daemon
  init      Interactive wizard — create a rig TOML and register with cloud
  discover  AI-assisted rig setup (probes hardware, generates TOML)
```

### `crucihil run` — run tests locally

```bash
# Run against the virtual simulation rig (no hardware needed)
crucihil run --suite tests/suites/engine_validation.yaml --rig rigs/virtual.toml

# Run against real hardware
crucihil run --suite tests/suites/engine_validation.yaml --rig rigs/my_bench.toml

# With JUnit XML + HTML output
crucihil run --suite tests/suites/engine_validation.yaml --rig rigs/virtual.toml \
  --output results.xml --html results.html
```
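
For CI, the same command runs headless and the JUnit XML can be published as a build artifact. A minimal GitHub Actions job might look like this — a sketch under assumptions; the workflow name, paths, and artifact step are illustrative, not shipped with CruciHiL:

```yaml
# .github/workflows/hil.yml — hypothetical workflow; adjust paths to your repo
name: HiL (virtual rig)
on: [push, pull_request]

jobs:
  hil-virtual:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install crucihil
      # Run the suite against the virtual rig and emit JUnit XML
      - run: |
          crucihil run \
            --suite tests/suites/engine_validation.yaml \
            --rig rigs/virtual.toml \
            --output results.xml
      # Keep the report even when tests fail
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: hil-results
          path: results.xml
```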

### `crucihil init` — set up a new rig

The interactive wizard for configuring a new rig. Run this on the bench machine:

```bash
crucihil init
```

It will:
1. Auto-detect CAN and Ethernet interfaces (`ip link`)
2. Prompt for bitrate, FD mode, power backend, DBC path
3. Write a validated `rigs/<name>.toml`
4. Optionally register the rig with the cloud control plane (self-registers, saves API key)

After `crucihil init`, start the agent with no further config:

```bash
crucihil agent --rig rigs/<name>.toml
```

### `crucihil agent` — persistent agent daemon

Runs on the bench machine. Connects to the cloud via WebSocket, receives test run commands, streams results back.

```bash
crucihil agent --rig rigs/my_bench.toml
```

**First boot auto-registration:** if `[rig.cloud]` has a `registration_token` but no `api_key`, the agent registers itself, saves the key to `~/.crucihil/credentials.toml`, and connects — no manual steps needed.

```toml
# rigs/my_bench.toml
[rig.cloud]
url                = "https://crucihil-server.fly.dev"
registration_token = "your-REGISTRATION_TOKEN-here"
# api_key is saved automatically after first boot
```

Options:

```
--rig, -r   PATH   Path to rig TOML config (required)
--cache     PATH   SQLite result cache path (default: ~/.crucihil/results.db)
--verbose, -v      Enable debug logging
```

---

## Writing Tests

```python
async def test_engine_startup(rig: Rig):
    await rig.can.send(message="EngineControl", fields={"Throttle": 50.0, "Mode": 1.0})
    result = await rig.can.expect(
        signal="EngineData.RPM",
        condition=lambda v: v > 800,
        timeout=2.0,
    )
    assert result.passed, result.fail_msg
```

Switch from virtual to real hardware: change `--rig rigs/virtual.toml` to `--rig rigs/my_bench.toml`. The test is unchanged.
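
Tests are grouped into the YAML suite manifests passed to `crucihil run --suite ...`. The manifest schema isn't reproduced here; purely as a hypothetical sketch, one might look like:

```yaml
# tests/suites/engine_validation.yaml — hypothetical layout; field names are assumptions
name: engine_validation
tests:
  - module: tests.hil.test_engine
    function: test_engine_startup
  - module: tests.hil.test_engine
    function: test_can_dropout_recovery
```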

### Fault injection

```python
import asyncio

async def test_can_dropout_recovery(rig: Rig):
    # Drop frames with arbitration ID 0x100 for one second, then verify the bus recovers
    async with rig.fault.inject(rig.fault.can_dropout(arb_id=0x100, duration=1.0)):
        await asyncio.sleep(1.0)
    result = await rig.can.expect("EngineData.RPM", lambda v: v > 0, timeout=3.0)
    assert result.passed, result.fail_msg
```

### ECU firmware flash

```python
async def test_firmware_update(rig: Rig):
    result = await rig.ecu.ecu_main.flash("builds/firmware_v2.1.bin")
    assert result.success, result.error
```

---

## Complete Rig Setup Workflow

This is the end-to-end process for bringing a new bench machine online.

### Step 1 — Install CruciHiL on the bench machine

```bash
pip install crucihil
```

### Step 2 — Run the setup wizard

```bash
crucihil init
```

Follow the prompts. At the end it will ask:

```
Connect this rig to a CruciHiL cloud server? [y/N]:
```

Answer `y` and provide:
- **Server URL** — `https://crucihil-server.fly.dev` (or your self-hosted URL)
- **Registration token** — the `REGISTRATION_TOKEN` from your server setup

The wizard registers the rig, saves the API key locally, and writes the `[rig.cloud]` section into the TOML. You won't need to handle the key manually.

### Step 3 — Start the agent

```bash
crucihil agent --rig rigs/<name>.toml
```

The rig appears as **connected** in the dashboard within seconds. You can now trigger test runs from the dashboard, CLI, or via AI through the MCP server.

### Step 4 (optional) — Install as a systemd service for production

```bash
sudo ./scripts/install-agent.sh \
  --rig rigs/my_bench.toml \
  --server https://crucihil-server.fly.dev \
  --key <api-key>
```

```bash
systemctl status crucihil-agent@my-bench
journalctl -u crucihil-agent@my-bench -f
systemctl restart crucihil-agent@my-bench
```

---

## Cloud Dashboard

The web dashboard is available at `https://app.crucihil.io`.

### First-time setup (self-hosted)

Bootstrap your first org and admin account via the setup API (one-time only):

```bash
curl -X POST https://your-server/api/v1/setup \
  -H 'Content-Type: application/json' \
  -d '{"org_name":"Acme Corp","admin_email":"you@company.com","admin_password":"strong-password"}'
```

The request returns a JWT. Log in to the dashboard with the same email and password.

### Inviting team members

Admins can invite members from **Settings → Team → Invite member**. An email is sent via Resend with a link to set their password. Members can view results and trigger runs; admins can also manage rigs and users.

### Connecting an AI client (MCP)

Add to Claude Desktop `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "crucihil": {
      "url": "https://crucihil-mcp.fly.dev/sse",
      "headers": {
        "Authorization": "Bearer <your-CRUCIHIL_API_KEY>"
      }
    }
  }
}
```

| MCP Tool | What it does |
|---|---|
| `list_rigs` | List rigs with online/offline status |
| `get_rig_config` | Hardware summary for one rig |
| `list_runs` | Query run history |
| `get_run_summary` | Pass/fail counts and status for one run |
| `run_test_suite` | Trigger a test suite on a connected rig |
| `cancel_run` | Cancel an active run |
| `get_results` | Per-test results (filterable by status) |
| `get_signal_trace` | Signal telemetry recorded during a run |
| `describe_failure` | Full failure context in one call — errors + signals + logs |
| `list_signals` | Parse a DBC and return all signal names |
| `list_tests` | Parse a YAML manifest and return test metadata |
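
For example, after a red CI run you might ask the assistant "what broke in the last run on rig `my_bench`, and why?" — a typical session calls `list_runs` to find the run, then `describe_failure` to pull the errors, signal traces, and logs in one shot (the exact tool sequence depends on the AI client).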

---

## Self-Hosting with Docker

```bash
# From a clone of the repo, configure the environment
cp .env.example .env
# Edit .env — set POSTGRES_PASSWORD and SECRET_KEY

# Bootstrap everything in one command
./setup.sh

# Start the dashboard dev server (hot reload)
./dev.sh
```

Dashboard at `http://localhost:5173`.

```
./setup.sh --status    # service health
./setup.sh --restart   # restart containers
./dev.sh --rig rigs/my_bench.toml   # register and start a native rig agent
```

### Deploy to Fly.io

```bash
fly deploy --config fly.server.toml   # control plane
fly deploy --config fly.mcp.toml      # MCP server
```

Set secrets (keep them out of the TOML files):

```bash
fly secrets set SECRET_KEY="..." REGISTRATION_TOKEN="..." RESEND_API_KEY="re_..." --app crucihil-server
```

Dashboard deploys automatically to Vercel on push to `main`.

---

## Supported Hardware Backends

| Bus | Backends |
|---|---|
| CAN | `socketcan` (Linux), `peak` (PEAK PCAN-USB), `virtual` |
| SOME/IP | `vsomeip`, `virtual_someip` |
| DoIP | `python-doip`, `virtual` |
| Power | `gpio_relay`, `bench_psu`, `virtual_power` |
| GPIO | `linux_gpio`, `virtual_gpio` |
| Custom | `backend = "myorg.module:ClassName"` via importlib |

---

## Project Structure

```
crucihil/
├── hal/          Layer 2: Rig HAL (backends, BSE, namespaces, config)
├── agent/        Layer 3: Test runner, agent daemon, SQLite cache, init wizard
├── server/       Layer 4: FastAPI control plane + PostgreSQL
├── mcp/          Layer 5: MCP server (11 tools, FastMCP)
└── cli/          Layer 6: CLI entry point

rigs/             Rig TOML configs (hardware details — never in test code)
tests/
├── unit/         Unit tests (361 passing)
└── integration/  Integration tests against virtual rig
scripts/
├── release.sh    Cut a new release: ./scripts/release.sh 0.2.0
└── install-agent.sh  Install agent as systemd service on a bench machine
```

---

## Cutting a Release

```bash
./scripts/release.sh 0.2.0
```

Bumps `pyproject.toml`, commits, tags `v0.2.0`, pushes. GitHub Actions publishes to PyPI and creates a GitHub Release automatically.

---

## Requirements

- Python 3.11+
- Real hardware additionally requires: Linux (for SocketCAN/GPIO), vsomeip, python-doip, and relevant drivers
