Metadata-Version: 2.4
Name: proxmox-template-builder
Version: 1.0.19
Summary: Interactive TUI for building and deploying Proxmox VM templates using Packer, Ansible, and cloud-init
License: MIT License
        
        Copyright (c) 2025 Proxmox Packer Templates Contributors
        
        Permission is hereby granted, free of charge, to any person obtaining a copy
        of this software and associated documentation files (the "Software"), to deal
        in the Software without restriction, including without limitation the rights
        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
        copies of the Software, and to permit persons to whom the Software is
        furnished to do so, subject to the following conditions:
        
        The above copyright notice and this permission notice shall be included in all
        copies or substantial portions of the Software.
        
        THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
        SOFTWARE.
        
Project-URL: Homepage, https://github.com/YOUR_ORG/linux-automation
Project-URL: Repository, https://github.com/YOUR_ORG/linux-automation
Project-URL: Issues, https://github.com/YOUR_ORG/linux-automation/issues
Keywords: proxmox,packer,ansible,vm,template,automation,tui
Classifier: Development Status :: 4 - Beta
Classifier: Environment :: Console
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: System :: Systems Administration
Classifier: Topic :: System :: Installation/Setup
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: textual>=0.90.0
Requires-Dist: pyyaml>=6.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21; extra == "dev"
Requires-Dist: build; extra == "dev"
Dynamic: license-file

# Proxmox VM Template Builder & Deployment System

**Automated VM template creation and deployment for Proxmox using Packer and Ansible**

Build standardized VM templates with Packer, then deploy and configure them with Ansible playbooks - all through an interactive TUI menu.

## 🎯 Key Features

- ✅ **Build Templates**: 9 template variants across 5 Linux distribution families (Ubuntu, Rocky Linux, AlmaLinux, Fedora, openSUSE), built with Packer
- ✅ **Deploy VMs**: Clone templates and apply Ansible configurations in one workflow
- ✅ **Batch Building**: Build all templates sequentially or in parallel (configurable concurrency)
- ✅ **Network-Safe**: Cloud-init networking disabled to prevent DHCP timeouts and boot delays
- ✅ **Package Profiles**: Minimal, Base, Extended (configurable package sets)
- ✅ **Interactive TUI**: Dialog-based menu for all operations
- ✅ **Automated Testing**: Built-in validation for templates and deployments

## 📋 Quick Start

### Install from PyPI (Recommended)

```bash
pip install proxmox-template-builder

# Launch the TUI
ptb
# or
proxmox-template-builder
```

### Custom Configuration

Override bundled defaults without modifying the installed package:

```bash
# Create your config directory
mkdir -p ~/.config/proxmox-template-builder/packages

# Override default settings (only the keys you want to change)
cat > ~/.config/proxmox-template-builder/defaults.conf <<EOF
DEFAULT_VM_CORES=4
DEFAULT_VM_MEMORY=4096
EOF

# Or point to a custom config directory
ptb --config-dir /path/to/your/config
# Or via environment variable
export PROXMOX_BUILDER_CONFIG_DIR=/path/to/your/config
```

**Config override lookup order** (highest priority first):
1. `--config-dir` CLI argument
2. `$PROXMOX_BUILDER_CONFIG_DIR` environment variable
3. `~/.config/proxmox-template-builder/`
4. Bundled defaults (always loaded as base)
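
As a rough illustration, the lookup collapses to a shell-style fallback chain (the variable names here are hypothetical; the actual logic lives in `tui/config/loader.py`):

```bash
# Hypothetical sketch of the config-dir resolution order
CONFIG_DIR="${CLI_CONFIG_DIR:-${PROXMOX_BUILDER_CONFIG_DIR:-$HOME/.config/proxmox-template-builder}}"
# Bundled defaults are always loaded first; matching files in $CONFIG_DIR override them.
```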

**Override files** (create only what you need):
| File | Merge Strategy |
|------|---------------|
| `defaults.conf` | Key-level override — your keys replace bundled keys |
| `distributions.yml` | Deep merge — add distros/versions without replacing existing |
| `packages/base-packages.yml` | Top-level key replacement — your `apt_packages` list replaces the bundled one |
| `packages/extended-packages.yml` | Same as base-packages |
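
For example, a `distributions.yml` override could add a new version entry without replacing the bundled list. The keys below are illustrative only; check the bundled `distributions.yml` for the actual schema:

```bash
# Illustrative deep-merge override -- key names must match the bundled file
cat > ~/.config/proxmox-template-builder/distributions.yml <<'EOF'
ubuntu:
  versions:
    "25.04":
      iso_url: "https://releases.ubuntu.com/25.04/ubuntu-25.04-live-server-amd64.iso"
EOF
```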

### Prerequisites

```bash
# On Proxmox host
apt-get update
apt-get install git dialog ansible python3-pip -y

# Install Packer
wget https://releases.hashicorp.com/packer/1.10.0/packer_1.10.0_linux_amd64.zip
unzip packer_1.10.0_linux_amd64.zip
mv packer /usr/local/bin/
```
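
Verify the toolchain before building:

```bash
packer version     # 1.10.0 or newer
ansible --version
dialog --version
python3 --version  # 3.9+ required for the TUI
```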

### Setup (from Git)

```bash
# Clone repository
cd /root
git clone https://github.com/yourusername/linux_automation.git
cd linux_automation

# Configure Proxmox credentials
cp packer/proxmox.auto.pkrvars.hcl.example packer/proxmox.auto.pkrvars.hcl
vim packer/proxmox.auto.pkrvars.hcl

# Configure defaults
cp config/defaults.conf.example config/defaults.conf
vim config/defaults.conf  # Set cloud-init user/password

# Run interactive TUI
pip install .
ptb
# Or use the legacy bash menu
./build-template.sh
```

## 🚀 Main Features

### 1. Build Templates

**Single Template:**
- Select distribution and version
- Choose package profile (Minimal/Base/Extended)
- Configure resources (CPU, RAM, disk)
- Auto-downloads ISOs if needed
- Build time: 15-30 minutes

**Build All Templates (New!):**
- Builds all 9 template variants automatically
- **Parallel mode**: 4 concurrent builds (configurable), ~2 hours total
- **Sequential mode**: One at a time, ~3.5 hours total
- Separate logs for each build
- Auto-configures cloud-init credentials

Supported distributions:
- Ubuntu 22.04, 24.04 (Server + Desktop)
- Rocky Linux 9
- AlmaLinux 9
- Fedora 42 (Server + Desktop)
- openSUSE Leap 15.6 (Server + Desktop)

**Note:** Debian 12 is temporarily disabled (its preseed configuration needs a rewrite)

### 2. Deploy VMs from Templates (New!)

**Complete deployment workflow:**
1. Select template to clone
2. Configure VM (ID, name, CPU, RAM, disk size)
3. Choose network mode (DHCP or static)
4. Select clone type (linked or full)
5. **Select Ansible playbooks** (multi-select)
6. Confirm and deploy

**Available Ansible Playbooks:**
- `base-config` - Hostname, timezone, system updates
- `users` - User management and SSH keys
- `nginx` - Web server
- `postgresql` - Database server (auto-detects version)
- `mongodb` - MongoDB database
- `docker-compose` - Docker and Docker Compose
- `k3s` - Single-node Kubernetes cluster (kubectl, helm included)
- `k3s-cluster` - **3-Node K3S Cluster** (deploys 3 VMs automatically!)
- `monitoring` - Prometheus Node Exporter

**Clone Types:**
- **Linked clone** (default): Fast, space-efficient, requires template
- **Full clone**: Independent copy, slower but standalone
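
For reference, the two clone types map onto the standard Proxmox CLI roughly like this (IDs are examples):

```bash
qm clone 9200 100 --name web-server          # linked clone (default for templates)
qm clone 9200 100 --name web-server --full   # full clone: independent copy
```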

**Deployment logs:** `logs/deploy-*.log` (relative to the repository root)

### 3. K3S 3-Node Cluster Deployment (New!)

**Special automated deployment** that creates a complete Kubernetes cluster with 1 server and 2 agent nodes.

**How it works:**
1. Select "k3s-cluster" from the playbook menu (mutually exclusive with "k3s" single-node)
2. Configure base VM specs (each node gets these specs)
3. System automatically (see the sketch after this list):
   - Finds 3 consecutive available VMIDs (default: starts from 200)
   - Deploys 3 VMs from selected template (server, agent1, agent2)
   - Installs K3S server on first node
   - Retrieves cluster join token
   - Installs K3S agents on remaining nodes
   - Forms complete cluster automatically
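
Under the hood, cluster formation follows the standard K3S install flow. A simplified sketch, assuming DHCP-assigned IPs (the actual `deploy-k3s-cluster.sh` also handles VMID allocation, IP discovery, and retries):

```bash
# On the server node: install K3S server (Traefik disabled, per the cluster config)
curl -sfL https://get.k3s.io | sh -s - server --disable traefik

# Read the cluster join token from the server
cat /var/lib/rancher/k3s/server/node-token

# On each agent node: join using the server IP and the token copied above
curl -sfL https://get.k3s.io | K3S_URL="https://<server-ip>:6443" K3S_TOKEN="<token>" sh -
```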

**Resource requirements:**
- **3x the configured specs** (e.g., 2 cores = 6 cores total, 2GB RAM = 6GB total)
- Storage: Depends on clone type (linked: minimal, full: 3x disk size)
- Network: All nodes on same network, DHCP assigned IPs

**Cluster configuration:**
- K3S version: stable (configurable in deploy script)
- Server features: kubeconfig accessible, Traefik disabled by default
- Networking: Flannel CNI (default)
- Access: kubectl configured on server node

**After deployment:**
```bash
# Access the server node (first VM)
ssh admin@<server-ip>

# Verify cluster
kubectl get nodes
# Should show: server + 2 agents, all Ready

# View cluster info
kubectl cluster-info

# Check system pods
kubectl get pods -A
```

**Access from local workstation:**
```bash
# Copy kubeconfig from server node
scp admin@<server-ip>:/etc/rancher/k3s/k3s.yaml ~/.kube/config

# Update server IP in config
sed -i 's/127.0.0.1/<server-ip>/' ~/.kube/config

# Use kubectl locally
kubectl get nodes
```

**Usage instructions:** Each node has detailed instructions in `/root/k3s-server-usage.txt` (server) and `/root/k3s-agent-usage.txt` (agents).

**Troubleshooting:**
- Check deployment logs: `logs/deploy-k3s-cluster-*.log`
- Verify connectivity: All nodes must reach each other on port 6443
- Agent not joining: Check token and server IP in agent logs
- View service status: `systemctl status k3s` (server) or `systemctl status k3s-agent` (agents)
- View logs: `journalctl -u k3s -f` or `journalctl -u k3s-agent -f`

**Uninstall:**
```bash
# On agents first
/usr/local/bin/k3s-agent-uninstall.sh

# Then on server
/usr/local/bin/k3s-uninstall.sh
```

### 4. Manage VMs and Templates

- View all VMs and templates with status
- Multi-select with SPACE bar
- Auto-stops running VMs before destruction
- Detailed confirmation and logging

## 📦 Package Profiles

| Profile | Description | Use Case |
|---------|-------------|----------|
| **Minimal** | OS only | Custom builds |
| **Base** | + qemu-guest-agent, cloud-init, essential utils | Most use cases |
| **Extended** | + dev tools, monitoring, network debugging | Development/ops servers |

**Extended packages include:**
- **HashiCorp Tools:** Terraform, Packer (from HashiCorp repository)
- **Kubernetes Tools:** kubectl (from Kubernetes repo), Helm, k9s (custom binaries)
- **Cloud CLIs:** AWS CLI v2, Azure CLI (skipped on the RHEL family; install via pip there)
- **Development:** Git, Python, build-essential/gcc, pip
- **Monitoring:** htop, tmux, sysstat, iotop, iftop
- **Network:** tcpdump, nmap, netcat, dnsutils
- **Automation:** Ansible
- **Note:** Docker removed from extended profile (use docker-compose playbook instead)
- **PATH Configuration:** Custom binaries in `/usr/local/bin` are automatically added to the system `PATH` (sketched below)
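
The `PATH` handling is plausibly a `profile.d` drop-in along these lines (a sketch, not the verbatim provisioning code):

```bash
# Hypothetical /etc/profile.d/local-bin.sh -- prepend /usr/local/bin exactly once
case ":$PATH:" in
  *:/usr/local/bin:*) ;;                     # already present
  *) export PATH="/usr/local/bin:$PATH" ;;
esac
```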

## 🗂️ Repository Structure

```
linux_automation/
├── build-template.sh          # Legacy bash TUI (fallback)
├── tui/                       # Python Textual TUI (pip-installable)
│   ├── app.py                 # Main Textual App
│   ├── screens/               # Wizard screens (build, deploy, manage)
│   ├── services/              # Proxmox API wrapper, script runner
│   ├── config/loader.py       # Config loading with external override support
│   └── data/                  # Bundled data files (package_data)
│       ├── config/            # defaults, distributions, package profiles
│       ├── scripts/           # deploy-vm.sh, deploy-k3s-cluster.sh
│       ├── packer/            # HCL2 templates by distro
│       ├── ansible/           # Ansible playbooks and config
│       └── cloud-init/        # Cloud-init templates
├── config -> tui/data/config  # Symlinks for backward compatibility
├── scripts -> tui/data/scripts
├── packer -> tui/data/packer
├── ansible -> tui/data/ansible
└── tests -> tui/data/tests
```

## 🎨 Template Features

**All templates include:**
- Cloud-init ready (credentials set automatically)
- QEMU guest agent
- Console IP display (shows IP on login screen)
- SSH configured
- Network via DHCP (cloud-init networking disabled)
- Package profile (minimal/base/extended)

**Desktop templates include:**
- GUI environment (GNOME/KDE)
- 4GB RAM, 25GB disk (auto-configured)

## 🔧 Configuration Files

### `config/defaults.conf`
```bash
# OS-level credentials (cloud-init)
DEFAULT_CLOUDINIT_USER="admin"
DEFAULT_CLOUDINIT_PASSWORD=""  # REQUIRED: Set a strong password

# Lab credentials (databases, apps, secondary users)
LAB_USER="labuser"
LAB_PASSWORD=""  # REQUIRED: Set a strong password

# Resources
DEFAULT_VM_CORES=2
DEFAULT_VM_MEMORY=2048
```
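
A quick pre-flight check that the required passwords were actually set:

```bash
# Flag empty credential values before building (simple grep check)
grep -nE '^(DEFAULT_CLOUDINIT_PASSWORD|LAB_PASSWORD)=""' config/defaults.conf \
  && echo "WARNING: set the empty password(s) above before building"
```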

**Lab Credentials Feature:**
For quick lab deployments, `LAB_USER` and `LAB_PASSWORD` are automatically applied to:
- Database users (MySQL, PostgreSQL, MSSQL)
- MSSQL SA password
- Application accounts
- See [LAB_CREDENTIALS.md](LAB_CREDENTIALS.md) for details

⚠️ **Lab use only** - Use unique passwords for production!

### `packer/proxmox.auto.pkrvars.hcl`
```hcl
proxmox_url  = "https://proxmox.local:8006/api2/json"
proxmox_node = "pve"
proxmox_username = "root@pam"
proxmox_password = "your-password"
```
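
This file holds credentials in plain text, so restrict its permissions and keep it out of version control (the `.example` suffix convention suggests the real file is meant to stay untracked):

```bash
chmod 600 packer/proxmox.auto.pkrvars.hcl
```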

## 📝 Usage Examples

### Build All Templates (Parallel)
```bash
./build-template.sh
→ Build All Templates
→ Parallel Build
```

### Deploy VM with Multiple Playbooks
```bash
./build-template.sh
→ Deploy VM from Template
→ Select ubuntu-2404-extended
→ Configure VM settings
→ Select clone type: linked
→ Select playbooks: base-config, nginx, monitoring
→ Confirm deployment
```

### Deploy K3S 3-Node Cluster
```bash
./build-template.sh
→ Deploy VM from Template
→ Select ubuntu-2404-base
→ Configure VM: 2 cores, 4GB RAM, 20GB disk
→ Select clone type: linked
→ Select playbook: k3s-cluster (only)
→ Confirm deployment

# Result: 3 VMs deployed with K3S cluster ready
# Access via: ssh admin@<server-ip>
# Then run: kubectl get nodes
```

### CLI Usage
```bash
# Deploy VM from template
./scripts/deploy-vm.sh 9200 100 "web-server" 2 2048 20G dhcp linked base-config nginx monitoring

# Template ID: 9200
# New VM ID: 100
# Name: web-server
# Resources: 2 cores, 2048MB RAM, 20GB disk
# Network: dhcp
# Clone: linked
# Playbooks: base-config, nginx, monitoring
```

## ✅ Testing

### Test Template
```bash
./tests/test-template.sh TEMPLATE_ID

# Tests:
# - Template exists and is valid
# - VM creation from template
# - QEMU agent functionality
# - Cloud-init completion
# - Cloud-init networking DISABLED ✓
# - Package installation (by profile)
# - Network connectivity
```

### Verify Deployment
```bash
# Check VM status
qm status VM_ID

# Get VM IP
qm agent VM_ID network-get-interfaces

# View deployment logs
tail -f logs/deploy-*.log

# View Ansible logs
tail -f logs/ansible-*.log
```
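
To pull just the first non-loopback IPv4 address out of the agent output (assuming `jq` is installed on the Proxmox host):

```bash
qm agent VM_ID network-get-interfaces \
  | jq -r '.[] | select(.name != "lo")
           | ."ip-addresses"[]?
           | select(."ip-address-type" == "ipv4")
           | ."ip-address"' \
  | head -1
```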

## 🐛 Troubleshooting

### Build Issues

**ISO download fails:**
```bash
# ISOs are stored in /var/lib/vz/template/iso
# Re-run the download helper from the repository root:
./scripts/download-iso.sh ubuntu 24.04
```

**Packer errors:**
```bash
cd packer && packer init .
```

### Deployment Issues

**No IP address:**
- Wait 30-60 seconds for DHCP
- Check: `qm agent VM_ID network-get-interfaces`
- Verify network: `qm config VM_ID | grep net0`
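
If you script deployments, a small polling loop avoids racing the guest agent (a sketch; `VMID` is a placeholder):

```bash
# Wait up to ~2 minutes for the guest agent to start answering
for i in $(seq 1 24); do
  qm agent "$VMID" network-get-interfaces >/dev/null 2>&1 && break
  sleep 5
done
```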

**Ansible playbook fails:**
- Check SSH access: `ssh admin@VM_IP`
- View logs: `tail -f logs/ansible-*.log`
- Verify password in `config/defaults.conf`

**PostgreSQL version error:**
- Fixed! Now auto-detects PostgreSQL version
- Ubuntu 24.04 uses PostgreSQL 16
- Ubuntu 22.04 uses PostgreSQL 14
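
One way to auto-detect the distro's default PostgreSQL major version on Debian/Ubuntu (a sketch of the idea, not necessarily the playbook's exact mechanism):

```bash
# Newest postgresql-NN package available from the configured repos
apt-cache search --names-only '^postgresql-[0-9]+$' | awk '{print $1}' | sort -V | tail -1
```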

### Common Errors

| Error | Solution |
|-------|----------|
| "Template ID already exists" | Choose different ID or destroy existing |
| "VM ID X already exists in cluster" | Use suggested next available ID |
| "Failed to get VM IP address" | Wait longer, check QEMU agent status |
| "No package matching 'postgresql-14'" | Update playbook (fixed in latest version) |

## 🔒 Security

**Template Security:**
- Cloud-init credentials set per `defaults.conf`
- SSH keys configured via cloud-init or Ansible playbooks
- SELinux/AppArmor enabled by default

**Deployment Security:**
- Ansible passwords stored in memory only
- SSH password auth can be disabled via playbooks
- Firewall configuration via Ansible playbooks

## 📚 Documentation

- **[ansible/README.md](ansible/README.md)** - Ansible playbook details and customization
- **[NETWORK-CONFIG-GUIDE.md](NETWORK-CONFIG-GUIDE.md)** - Network troubleshooting
- **[ARCHITECTURE.md](ARCHITECTURE.md)** - Technical design decisions

## 💡 Best Practices

1. **Use parallel builds** for faster template creation (4 concurrent by default)
2. **Use linked clones** for fast deployment and space efficiency
3. **Select base-config** playbook for all deployments (sets hostname, updates, etc.)
4. **Test templates** after building: `./tests/test-template.sh TEMPLATE_ID`
5. **Check logs** when troubleshooting: `logs/deploy-*.log` and `logs/ansible-*.log`
6. **Regular updates**: Rebuild templates monthly for security patches

## 🎯 Quick Reference

```bash
# Install from PyPI
pip install proxmox-template-builder

# Launch TUI (short alias)
ptb

# Launch with custom config
ptb --config-dir /path/to/config

# Build all templates in parallel
ptb → Build All Templates → Parallel

# Deploy VM with configurations
ptb → Deploy VM → Select template → Configure → Select playbooks

# Deploy K3S 3-node cluster
ptb → Deploy VM → Select template → Configure → Select k3s-cluster → Confirm

# Manage VMs
ptb → Manage VMs and Templates → Multi-select → Confirm

# Legacy bash menu (also still works)
./build-template.sh

# Test template
./tests/test-template.sh TEMPLATE_ID

# Manual deployment
./scripts/deploy-vm.sh TEMPLATE_ID NEW_VM_ID NAME CORES MEMORY DISK_SIZE NETWORK CLONE_TYPE [playbooks...]

# K3S cluster CLI deployment
./scripts/deploy-k3s-cluster.sh -t TEMPLATE_ID -n "cluster-name" -c 2 -m 4096 -d 20G -k linked -v stable

# View logs
tail -f "$(ls -t logs/deploy-* | head -1)"
```

---

**Remember:** Cloud-init networking is disabled in all templates to prevent boot delays!
