Metadata-Version: 2.4
Name: hiveframe
Version: 0.1.0
Summary: Proxmox automation toolkit — define infrastructure as YAML, provision with one command
Project-URL: Homepage, https://codeberg.org/zcross/hiveframe
Project-URL: Repository, https://codeberg.org/zcross/hiveframe
Author-email: zcross <crossztulip@gmail.com>
License: MIT
License-File: LICENSE
Requires-Python: >=3.11
Requires-Dist: click>=8.0
Requires-Dist: fastapi>=0.100
Requires-Dist: httpx>=0.24
Requires-Dist: jinja2>=3.1
Requires-Dist: proxmoxer>=2.0
Requires-Dist: pydantic-settings>=2.0
Requires-Dist: pydantic>=2.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: requests>=2.31.0
Requires-Dist: rich>=13.0
Requires-Dist: uvicorn>=0.23
Provides-Extra: dev
Requires-Dist: mypy>=1.10.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23.0; extra == 'dev'
Requires-Dist: pytest-mock>=3.12.0; extra == 'dev'
Requires-Dist: pytest>=8.0.0; extra == 'dev'
Requires-Dist: ruff>=0.4.0; extra == 'dev'
Requires-Dist: types-pyyaml>=6.0.0; extra == 'dev'
Requires-Dist: types-requests>=2.31.0; extra == 'dev'
Description-Content-Type: text/markdown

# Hiveframe

**Proxmox automation toolkit — define infrastructure as YAML, provision with one command**

![Python](https://img.shields.io/badge/python-3.11%2B-blue)
![License](https://img.shields.io/badge/license-MIT-green)
[![Codeberg](https://img.shields.io/badge/codeberg-zcross%2Fhiveframe-blue)](https://codeberg.org/zcross/hiveframe)

---

## What it does

Hiveframe lets you define Proxmox infrastructure as YAML and provision it with one command. VLANs, VMs, cloud-init config, and network assignment — all declarative, all version-controllable. Built for VMware refugees and MSPs who want GitOps-style workflows without enterprise tooling.

## Why

Proxmox has a great API but no native declarative provisioning layer. The Terraform provider exists but pulls in significant overhead for straightforward use cases. Hiveframe is the lightweight middle ground — a YAML file, a CLI, and no external state backends.

---

## Quick start

```bash
pip install hiveframe  # coming soon — see Development below for now
```

Define your stack:

```yaml
name: hiveframe-test
node: pve

vlans:
  - name: test-vlan
    vlan_id: 99
    cidr: 192.168.99.0/24
    gateway: 192.168.99.1
    description: Hiveframe test VLAN - safe to delete

vms:
  - name: hiveframe-test-vm
    template_id: 9000
    vlan: test-vlan
    cores: 2
    memory_mb: 2048
    disk_gb: 20
    start_on_create: false
    cloud_init:
      user: hiveframe
      password: changeme
      ssh_keys: []
      ip_config: dhcp
```

Configure your Proxmox connection at `~/.hiveframe/config.yaml`:

```yaml
proxmox_host: 10.0.20.1
proxmox_user: root@pam
proxmox_token_id: hiveframe
proxmox_token_secret: your-token-secret
proxmox_verify_ssl: false
```

Then provision:

```bash
hiveframe validate      # check stack.yaml against the schema
hiveframe plan          # dry-run — show what would be created
hiveframe apply         # create VLANs and VMs on the Proxmox node
hiveframe status        # compare live state against the state file
hiveframe destroy       # tear everything down
```

---

## Commands

| Command              | Description                                              |
|----------------------|----------------------------------------------------------|
| `hiveframe init`     | Scaffold a new `stack.yaml` in the current directory     |
| `hiveframe validate` | Validate `stack.yaml` against the schema                 |
| `hiveframe plan`     | Dry-run — show what would be created or skipped          |
| `hiveframe apply`    | Provision VLANs and VMs against a live Proxmox node      |
| `hiveframe status`   | Show live drift between state file and Proxmox           |
| `hiveframe destroy`  | Tear down all resources defined in the stack             |

All commands accept `-f / --file` to point at a non-default stack file.  
The root group accepts `--debug` to print the resolved config path.

---

## Stack reference

### Top-level

| Field         | Type     | Default | Description                              |
|---------------|----------|---------|------------------------------------------|
| `name`        | `str`    | —       | Unique stack identifier                  |
| `description` | `str`    | `""`    | Optional free-text description           |
| `node`        | `str`    | `pve`   | Proxmox node name to provision on        |
| `vlans`       | `list`   | `[]`    | List of `VlanConfig` entries             |
| `vms`         | `list`   | `[]`    | List of `VmConfig` entries               |

### VlanConfig

| Field         | Type      | Default | Description                              |
|---------------|-----------|---------|------------------------------------------|
| `name`        | `str`     | `""`    | Friendly identifier, referenced by VMs   |
| `vlan_id`     | `int`     | —       | 802.1Q VLAN ID (1–4094)                  |
| `cidr`        | `str`     | —       | Subnet in CIDR notation, e.g. `10.0.1.0/24` |
| `gateway`     | `str`     | `null`  | Gateway IP — assigned to the bridge      |
| `description` | `str`     | `""`    | Written as a comment on the bridge       |

Hiveframe creates a dedicated Linux bridge per VLAN: `vmbr<vlan_id>`.
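On a typical Proxmox node, a VLAN with `vlan_id: 99` and gateway `192.168.99.1` would correspond to an `/etc/network/interfaces` stanza roughly like the following (illustrative only — the exact fields Hiveframe writes may differ):

```
auto vmbr99
iface vmbr99 inet static
        address 192.168.99.1/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0
```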

### VmConfig

| Field             | Type     | Default      | Description                                    |
|-------------------|----------|--------------|------------------------------------------------|
| `name`            | `str`    | —            | VM hostname and Proxmox name (must be a valid DNS hostname) |
| `template_id`     | `int`    | —            | VMID of the template to clone                  |
| `vlan`            | `str`    | —            | `name` of the VLAN to attach `net0` to         |
| `cores`           | `int`    | `2`          | vCPU count                                     |
| `memory_mb`       | `int`    | `2048`       | RAM in megabytes                               |
| `disk_gb`         | `int`    | `20`         | Total disk size in GB — resizes above template |
| `storage`         | `str`    | `local-lvm`  | Proxmox storage pool for disk and cloud-init   |
| `start_on_create` | `bool`   | `true`       | Start VM immediately after provisioning        |
| `tags`            | `list`   | `[]`         | Proxmox tags to apply                          |
| `cloud_init`      | `object` | see below    | Cloud-init configuration block                 |
| `firewall_rules`  | `list`   | `[]`         | Per-VM firewall rules (provisioning coming soon)|

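Each VM's `vlan` must name an entry in the top-level `vlans` list. As an illustration of the kind of cross-reference check `validate` needs to perform (this is a standalone sketch, not Hiveframe's actual validator), over an already-parsed stack:

```python
def check_vlan_refs(stack: dict) -> list[str]:
    """Return one error message per VM whose `vlan` names no defined VLAN."""
    defined = {v["name"] for v in stack.get("vlans", [])}
    return [
        f"vm {vm['name']!r}: unknown vlan {vm.get('vlan')!r}"
        for vm in stack.get("vms", [])
        if vm.get("vlan") not in defined
    ]

# A parsed stack with one valid and one dangling reference:
stack = {
    "vlans": [{"name": "test-vlan", "vlan_id": 99}],
    "vms": [
        {"name": "ok-vm", "vlan": "test-vlan"},
        {"name": "bad-vm", "vlan": "missing"},
    ],
}
print(check_vlan_refs(stack))  # one error, for bad-vm
```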
### CloudInitConfig

| Field           | Type       | Default    | Description                                                         |
|-----------------|------------|------------|---------------------------------------------------------------------|
| `user`          | `str`      | `ubuntu`   | Default user created by cloud-init                                  |
| `password`      | `str`      | `null`     | Plain-text password — avoid in production, use `ssh_keys` instead   |
| `ssh_keys`      | `list[str]`| `[]`       | Authorized public keys                                              |
| `ip_config`     | `str`      | `null`     | `dhcp`, or Proxmox format: `ip=10.0.1.5/24,gw=10.0.1.1`           |
| `nameservers`   | `list[str]`| `[]`       | DNS servers                                                         |
| `search_domain` | `str`      | `null`     | DNS search domain                                                   |

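Combining the fields above, a static-IP cloud-init block might look like this (all values illustrative):

```yaml
cloud_init:
  user: hiveframe
  ssh_keys:
    - ssh-ed25519 AAAA... you@example.com
  ip_config: ip=192.168.99.10/24,gw=192.168.99.1
  nameservers: [1.1.1.1, 9.9.9.9]
  search_domain: lab.local
```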
### State file

After `apply`, Hiveframe writes `.hiveframe-state.json` next to your `stack.yaml`. This file tracks VMIDs and VLAN bridges and is required for `status` and `destroy`. Add it to `.gitignore` if your stack contains sensitive values.

---

## Configuration

`~/.hiveframe/config.yaml` — loaded automatically. All fields can also be set via environment variables with the `HIVEFRAME_` prefix.

| Field                   | Env var                          | Description                        |
|-------------------------|----------------------------------|------------------------------------|
| `proxmox_host`          | `HIVEFRAME_PROXMOX_HOST`         | IP or hostname, no scheme          |
| `proxmox_user`          | `HIVEFRAME_PROXMOX_USER`         | Default: `root@pam`                |
| `proxmox_token_id`      | `HIVEFRAME_PROXMOX_TOKEN_ID`     | API token name                     |
| `proxmox_token_secret`  | `HIVEFRAME_PROXMOX_TOKEN_SECRET` | API token secret                   |
| `proxmox_verify_ssl`    | `HIVEFRAME_PROXMOX_VERIFY_SSL`   | Default: `false`                   |
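The same connection settings as the YAML example above, expressed as environment variables:

```shell
# Equivalent to ~/.hiveframe/config.yaml, e.g. for CI or containers
export HIVEFRAME_PROXMOX_HOST=10.0.20.1
export HIVEFRAME_PROXMOX_USER=root@pam
export HIVEFRAME_PROXMOX_TOKEN_ID=hiveframe
export HIVEFRAME_PROXMOX_TOKEN_SECRET=your-token-secret
export HIVEFRAME_PROXMOX_VERIFY_SSL=false
```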

---

## Requirements

- Python 3.11+
- Proxmox VE 7.x, 8.x, or 9.x
- API token for `root@pam` (privilege separation not yet supported)
- Template VM with cloud-init drive (`ide2`) for VM provisioning
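One way to create such a token, using Proxmox VE's `pveum` tool on the node itself (`--privsep 0` disables privilege separation, which Hiveframe currently requires):

```shell
# Run as root on the Proxmox node; the secret is printed once — save it
pveum user token add root@pam hiveframe --privsep 0
```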

---

## Development

```bash
git clone https://codeberg.org/zcross/hiveframe.git
cd hiveframe
pip install -e ".[dev]"
pytest
```

Copy `stack.yaml.example` to `stack.yaml` and fill in your values before running `apply`.

---

## Roadmap

- Firewall rule provisioning
- Drift detection (`status --watch`)
- Web dashboard (FastAPI + HTMX)
- Multi-node support
- `pip`-installable package on PyPI

---

## License

MIT
