The nervous system for your network. A unified, async-first Python SDK and CLI for multi-vendor network automation — with AI, security, topology, and intent-based management built in.
Plexar is a layered platform SDK. Each layer has a single responsibility and communicates with adjacent layers through clean interfaces. No layer reaches past its neighbor.
Every driver, every operation, every I/O call is async/await. Running 100 devices concurrently is as natural as running one.
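The fan-out pattern can be sketched with nothing but the standard library; `get_bgp_summary` here is a stand-in coroutine for illustration, not the real driver call:

```python
import asyncio

async def get_bgp_summary(hostname: str) -> dict:
    # Stand-in for a real driver call such as device.get_bgp_summary().
    await asyncio.sleep(0.01)  # simulated network I/O
    return {"hostname": hostname, "peers_established": 4}

async def run_fleet(hostnames: list[str], max_connections: int = 20) -> list[dict]:
    # A semaphore caps in-flight connections, mirroring net.pool(max_connections=20).
    sem = asyncio.Semaphore(max_connections)

    async def bounded(host: str) -> dict:
        async with sem:
            return await get_bgp_summary(host)

    return await asyncio.gather(*(bounded(h) for h in hostnames))

results = asyncio.run(run_fleet([f"leaf-{i:02d}" for i in range(100)]))
print(len(results))  # → 100
```

One event loop drives all 100 devices; the semaphore only bounds how many are mid-I/O at once.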
All vendors implement BaseDriver. Application code never references a specific vendor — only device.get_bgp_summary().
Sanitization, auditing, and RBAC run in the execution path, not as opt-in decorators. You cannot accidentally bypass them.
OperationRunner, StateStore, AuditSink, EventBus are all ABCs. The tool uses local defaults. The future platform swaps them without touching application code.
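A minimal sketch of that seam, with hypothetical method names (the real ABCs may differ):

```python
import abc
import json

class AuditSink(abc.ABC):
    # Hypothetical shape of the seam; illustrates the pattern, not the real API.
    @abc.abstractmethod
    def emit(self, event: dict) -> None: ...

class LocalJSONLSink(AuditSink):
    # The tool's local default: buffer JSONL lines (a real sink would append to a file).
    def __init__(self) -> None:
        self.lines: list[str] = []

    def emit(self, event: dict) -> None:
        self.lines.append(json.dumps(event))

class PlatformSink(AuditSink):
    # A future platform implementation could ship events to a server instead.
    def emit(self, event: dict) -> None:
        ...  # e.g. POST to the platform API

def record(sink: AuditSink, event: dict) -> None:
    # Application code depends only on the ABC, so sinks swap freely.
    sink.emit(event)

sink = LocalJSONLSink()
record(sink, {"event": "COMMAND_EXECUTE", "success": True})
print(sink.lines[0])
```

Swapping `LocalJSONLSink` for `PlatformSink` changes nothing in `record` or any caller.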
Core installs in seconds. AI, gNMI, topology, vault — each is a separate pip install plexar[extra]. Use only what you need.
Third parties publish plugins as ordinary Python packages with entry points. Install the package; Plexar auto-discovers it.
Tracing a single device.get_bgp_summary() call from user code to device and back:
# What the user writes
async with device:
    bgp = await device.get_bgp_summary()
    print(bgp.peers_established)
# What happens internally — all automatic:
# 1. RBAC: is current context role >= VIEWER?
# 2. Sanitize: is device.hostname safe?
# 3. DriverRegistry: platform="arista_eos" → AristaEOSDriver
# 4. Scrapli: SSH to management_ip:22
# 5. Run: "show bgp summary | json"
# 6. Parse: JSON → BGPSummary(Pydantic model)
# 7. Audit: {event: COMMAND_EXECUTE, hostname: ..., success: true}
# 8. Return typed BGPSummary object
Intent-based networking separates what you want from how to configure it. You write vendor-neutral primitives. Plexar compiles them to each vendor's CLI.
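A vendor-neutral primitive that would produce the outputs below might look like this (field names follow the intent YAML schema shown later in this README):

```yaml
ensure:
  - type: bgp
    asn: 65001
    neighbors:
      - ip: 10.0.0.1
        remote_as: 65000
        description: spine-01
```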
# Arista EOS output:
router bgp 65001
neighbor 10.0.0.1 remote-as 65000
neighbor 10.0.0.1 description spine-01
# Cisco IOS output (same primitive):
router bgp 65001
neighbor 10.0.0.1 remote-as 65000
neighbor 10.0.0.1 description spine-01
# Juniper JunOS output (same primitive):
set protocols bgp group UNDERLAY neighbor 10.0.0.1 peer-as 65000
set protocols bgp group UNDERLAY description spine-01
Every operation passes through five security controls automatically:
| Control | What it does | Where it runs |
|---|---|---|
| Sanitizer | Blocks command injection, prompt injection, SSTI, path traversal, output overflow | Before every device call and AI call |
| RBAC | Checks caller role against required permission level | Entry point of every operation |
| Secrets | Fetches credentials from Vault/keyring/env — never from plaintext | At connection time |
| TLS/SSH Policy | Enforces cipher suites and host key verification | At transport setup |
| Audit Logger | Appends every event to immutable JSONL + optional SIEM sink | After every operation completes |
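To give a flavor of the kind of check the Sanitizer performs, here is a deliberately tiny sketch; the real control covers far more cases (prompt injection, SSTI, output overflow):

```python
import re

# Illustrative patterns only, not Plexar's actual rules.
_SHELL_META = re.compile(r"[;|&`$><\n]")          # shell metacharacters
_HOSTNAME_OK = re.compile(r"^[A-Za-z0-9._-]{1,253}$")

def check_hostname(hostname: str) -> str:
    # Reject anything that is not a plain DNS-style name.
    if not _HOSTNAME_OK.match(hostname):
        raise ValueError("Input failed sanitization")
    return hostname

def check_command(command: str) -> str:
    # Reject command strings that could chain or redirect shell commands.
    if _SHELL_META.search(command):
        raise ValueError("Input failed sanitization")
    return command

check_hostname("spine-01")                  # passes
check_command("show bgp summary")           # passes
# check_command("show version; reload")     # would raise ValueError
```

Because these checks sit in the execution path, a caller cannot forget to invoke them.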
Three AI-powered features, each solving a different problem:
AI parsing. An LLM parses raw CLI output into typed Pydantic models when the built-in regex/TextFSM parser fails. Results are LRU-cached: the same output never hits the API twice.
Root cause analysis. Given a network event, gathers live device context, topology, and drift data, then produces a structured diagnosis with remediation steps.
Natural-language questions about the fleet. "Which leafs have BGP peers down?" → plan → concurrent device queries → LLM synthesizes an answer.
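The parser's caching behaviour can be sketched with functools.lru_cache, using a call counter in place of a real LLM:

```python
from functools import lru_cache

calls = {"n": 0}

@lru_cache(maxsize=256)
def parse_with_llm(raw_output: str) -> tuple:
    # Stand-in for the real LLM call; lru_cache keys on the raw output string,
    # so identical output is served from cache without a second "API call".
    calls["n"] += 1
    peers = raw_output.count("Established")
    return ("BGPSummary", peers)  # hashable stand-in for a Pydantic model

raw = "10.0.0.1 Established\n10.0.0.2 Established\n"
parse_with_llm(raw)
parse_with_llm(raw)   # cache hit
print(calls["n"])     # → 1
```

Note that `lru_cache` requires hashable arguments and is most useful when return values are immutable, which is why the sketch returns a tuple rather than a mutable object.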
Capture a live snapshot of your network's state, then simulate failures before touching production:
await twin.capture(network=net)
result = twin.simulate_interface_failure("leaf-01", "Ethernet1")
# → risk_score: 72/100
# → affected_devices: ["leaf-01", "server-rack-3"]
# → warnings: ["leaf-01 has no redundant uplink on this segment"]
result = twin.validate_intent(intent)
# → passed: True (safe to apply)
# OR
# → conflicts: ["BGP ASN 65001 already configured on spine-01"]
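Blast-radius simulation reduces to graph reachability. An illustrative sketch over a toy topology (the real twin builds its graph from LLDP discovery, and its risk scoring is richer than this):

```python
from collections import deque

# Toy adjacency map for illustration only.
links = {
    "spine-01": {"leaf-01", "leaf-02"},
    "leaf-01": {"spine-01", "server-rack-3"},
    "leaf-02": {"spine-01"},
    "server-rack-3": {"leaf-01"},
}

def reachable(graph: dict, start: str, failed: str) -> set:
    # BFS over the topology with the failed node removed.
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nbr in graph.get(node, ()):
            if nbr != failed and nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

# Which devices are cut off from spine-01 when leaf-01 fails?
before = set(links) - {"leaf-01"}
after = reachable(links, "spine-01", failed="leaf-01")
print(sorted(before - after))  # → ['server-rack-3']
```

A risk score can then weight the cut-off set by device role, redundancy, and traffic.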
Third parties extend Plexar by declaring an entry point. No code changes to Plexar required:
# In third-party pyproject.toml:
[project.entry-points."plexar.plugins"]
aruba_cx = "plexar_aruba.plugin:ArubaOSCXPlugin"
# After pip install plexar-aruba:
# DriverRegistry automatically has "aruba_cx" → ArubaOSCXDriver
# inventory.yaml platform: aruba_cx just works
Plexar v1 is a tool — it runs wherever Python runs. There is no server to operate, no database to manage, no daemon to keep alive. Pick the scenario that matches your environment.
| Requirement | Minimum | Recommended |
|---|---|---|
| Python | 3.10 | 3.11 or 3.12 |
| OS | Linux, macOS, Windows (WSL2) | Ubuntu 22.04 LTS |
| RAM | 256 MB | 1 GB (4 GB if using local AI) |
| Disk | 100 MB | 5 GB (if using local AI model) |
| Network | SSH access to devices | Dedicated management network |
| Package installer | pip | uv (10× faster installs) |
The fastest path. Run Plexar from your laptop against a lab or staging network.
# macOS
brew install python@3.11
# Ubuntu / Debian
sudo apt update && sudo apt install python3.11 python3.11-venv python3-pip
# Verify
python3 --version
# Create and activate
python3.11 -m venv ~/.venvs/plexar
source ~/.venvs/plexar/bin/activate   # Linux/macOS/WSL2
# ~/.venvs/plexar/Scripts/activate    # native Windows only
# Install core + CLI + topology
pip install "plexar[cli,topology]"
# Install everything
pip install "plexar[all]"
# Verify
plexar --version
# inventory.yaml
devices:
  - hostname: spine-01
    management_ip: 192.168.1.1
    platform: arista_eos
    credentials:
      username: admin
      password_env: NET_PASSWORD
    metadata:
      role: spine
      site: dc1
  - hostname: leaf-01
    management_ip: 192.168.1.10
    platform: arista_eos
    credentials:
      username: admin
      password_env: NET_PASSWORD
    metadata:
      role: leaf
      site: dc1
export NET_PASSWORD="your-device-password"
plexar --inventory inventory.yaml devices list
plexar --inventory inventory.yaml bgp show spine-01
plexar --inventory inventory.yaml bgp fleet
Install Plexar on a shared Linux server that has SSH access to all devices. Multiple engineers use it. Recommended for teams of 2–20.
A small VM is fine. 2 vCPU, 2 GB RAM, Ubuntu 22.04. It must have SSH access to your device management network.
# On the jump host
sudo apt update
sudo apt install python3.11 python3.11-venv python3-pip git -y
# Create a dedicated service user
sudo useradd -m -s /bin/bash plexar
sudo su - plexar
python3.11 -m venv ~/venv
source ~/venv/bin/activate
pip install "plexar[all]"
# Create config directory
mkdir -p ~/.plexar/state ~/.plexar/snapshots
mkdir -p ~/network/intent
# Option A: Environment file (simple)
cat > ~/.plexar/.env << 'EOF'
NET_PASSWORD=your-device-password
NET_SSH_KEY_PATH=/home/plexar/.ssh/network_key
OPENAI_API_KEY=sk-... # optional
EOF
chmod 600 ~/.plexar/.env
# Add to ~/.bashrc
echo 'set -a; source ~/.plexar/.env; set +a' >> ~/.bashrc
source ~/.bashrc
# Option B: OS keyring (more secure)
pip install "plexar[keyring]"
python3 -c "
from plexar.security.secrets import SecretsManager, KeyringBackend
import asyncio
secrets = SecretsManager(backend=KeyringBackend())
asyncio.run(secrets.set('net_password', 'your-device-password'))
"
cd ~/network
git init
git remote add origin https://github.com/yourorg/network-config
# Recommended structure
mkdir -p intent snapshots reports
# .gitignore
cat > .gitignore << 'EOF'
snapshots/
reports/
*.env
*.key
EOF
# /home/plexar/network/drift_monitor.py
import asyncio
from plexar import Network
from plexar.state.drift import DriftMonitor

async def main():
    net = Network()
    net.inventory.load('yaml', path='inventory.yaml')
    monitor = DriftMonitor(interval=300)
    await monitor.start(devices=net.inventory.all())

asyncio.run(main())

# /etc/systemd/system/plexar-drift.service
[Unit]
Description=Plexar Drift Monitor
After=network.target

[Service]
Type=simple
User=plexar
WorkingDirectory=/home/plexar/network
EnvironmentFile=/home/plexar/.plexar/.env
ExecStart=/home/plexar/venv/bin/python3 /home/plexar/network/drift_monitor.py
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
sudo systemctl daemon-reload
sudo systemctl enable plexar-drift
sudo systemctl start plexar-drift
sudo systemctl status plexar-drift
# Each engineer SSHes to the jump host and uses plexar
ssh engineer@jump-host.corp.com
# They activate the shared venv
source /home/plexar/venv/bin/activate
# And run commands against the shared inventory
plexar --inventory ~/network/inventory.yaml bgp fleet
Run Plexar as part of a GitOps workflow. Engineers commit intent YAML files; the pipeline applies them.
mkdir -p .github/workflows
cp plexar/.github/workflows/network-ci.yml .github/workflows/
In your repository: Settings → Secrets → Actions
| Secret | Value |
|---|---|
| PLEXAR_SSH_KEY | Contents of your SSH private key |
| SLACK_WEBHOOK | Slack webhook URL (optional) |
| OPENAI_API_KEY | For AI features (optional) |
Settings → Environments → New: production. Add required reviewers. Production deploys will wait for approval before executing.
git checkout -b change/update-leaf-bgp
vim intent/leaf-bgp.yaml
git add intent/leaf-bgp.yaml
git commit -m "feat: add leaf-03 BGP neighbors"
git push origin change/update-leaf-bgp
# Open a PR → CI runs validate + plan + simulate + posts PR comment
# Merge to main → approval gate → apply + verify + compliance report
For environments with no outbound internet access.
# On a connected machine, download everything
# (use the same OS and Python minor version as the air-gapped target,
#  or pass --platform / --python-version to pip download)
pip download "plexar[all]" -d ./plexar-packages/
# Bundle the wheel files
tar czf plexar-packages.tar.gz plexar-packages/
# Transfer to air-gapped machine
scp plexar-packages.tar.gz user@airgapped-host:
tar xzf plexar-packages.tar.gz
python3.11 -m venv ~/venv
source ~/venv/bin/activate
pip install --no-index --find-links ./plexar-packages "plexar[all]"
plexar --version # works with zero internet
# Download Ollama on connected machine, transfer binary
# Download model weights on connected machine:
ollama pull qwen2.5-coder:7b
# Copy ~/.ollama/models/ to air-gapped machine
# Configure Plexar to use local AI
export PLEXAR_AI_BACKEND=local
export PLEXAR_AI_MODEL=qwen2.5-coder:7b
# AI features now work with zero outbound traffic
plexar ask "which leafs have BGP peers down?"
| Command | Installs | Use case |
|---|---|---|
| pip install plexar | Core SDK + CLI | Basic use, scripts |
| pip install plexar[cli] | + Click, Rich | Terminal use |
| pip install plexar[ai] | + litellm, openai | AI features (API-backed) |
| pip install plexar[topology] | + networkx | Topology analysis |
| pip install plexar[gnmi] | + grpcio, pygnmi | Streaming telemetry |
| pip install plexar[netconf] | + ncclient | NETCONF transport |
| pip install plexar[netbox] | + pynetbox | NetBox inventory |
| pip install plexar[nautobot] | + requests | Nautobot inventory |
| pip install plexar[vault] | + hvac | HashiCorp Vault secrets |
| pip install plexar[telemetry] | + opentelemetry | OTLP metrics export |
| pip install plexar[discovery] | + puresnmp | SNMP sweep (coming soon) |
| pip install plexar[all] | Everything | Full installation |
Everything starts with inventory.yaml. This tells Plexar about your devices.
devices:
  - hostname: spine-01
    management_ip: 192.168.1.1
    platform: arista_eos          # arista_eos | cisco_ios | cisco_nxos | juniper_junos
    transport: ssh                # ssh | netconf | gnmi
    port: 22
    credentials:
      username: admin
      password_env: NET_PASSWORD  # env var name — never the actual password
      # OR: ssh_key_env: SSH_KEY_ENV
      # OR: ssh_key: /path/to/key
    metadata:
      role: spine
      site: dc1
      rack: A1
      tags: [production, bgp-core]
# ── Devices ──────────────────────────────────────────
plexar devices list # all devices
plexar devices list --role leaf --site dc1 # filtered
plexar devices connect spine-01 # test SSH
plexar devices show spine-01 # platform + BGP + interfaces
# ── BGP ──────────────────────────────────────────────
plexar bgp show spine-01 # single device
plexar bgp fleet # all devices
plexar bgp fleet --role leaf # filtered
# ── Interfaces ───────────────────────────────────────
plexar interfaces show leaf-01
plexar interfaces show leaf-01 --down-only
# ── Intent ───────────────────────────────────────────
plexar intent plan ./intent/bgp.yaml # preview, no changes
plexar intent apply ./intent/bgp.yaml # apply (prompts)
plexar intent apply ./intent/bgp.yaml --yes # apply (no prompt)
plexar intent verify ./intent/bgp.yaml # verify running state
# ── Topology ─────────────────────────────────────────
plexar topology discover # LLDP walk
plexar topology path leaf-01 firewall-01 # shortest path
plexar topology blast spine-01 # blast radius
# ── Snapshots ────────────────────────────────────────
plexar snapshot capture # save all device state
plexar snapshot capture --role spine # filtered
# ── Config ───────────────────────────────────────────
plexar config push spine-01 ./configs/spine-01.txt
# ── AI ───────────────────────────────────────────────
plexar ask "which leafs have BGP peers down?"
plexar ask "what is the total prefix count across all spines?"
plexar rca leaf-01 # root cause analysis
# ── Output formats ───────────────────────────────────
plexar --output json bgp fleet # JSON output
plexar --output yaml devices list # YAML output
plexar --no-color bgp fleet # no ANSI color
import asyncio
from plexar import Network

async def main():
    net = Network()
    net.inventory.load("yaml", path="inventory.yaml")

    # ── Single device ─────────────────────────────────
    device = net.inventory.get("spine-01")
    async with device:
        bgp = await device.get_bgp_summary()
        interfaces = await device.get_interfaces()
        info = await device.get_platform_info()
        routes = await device.get_routing_table()

    # ── Fleet concurrently ────────────────────────────
    leafs = net.devices(role="leaf")
    async with net.pool(max_connections=20) as pool:
        results = await pool.gather(lambda d: d.get_bgp_summary())

    # ── Intent ────────────────────────────────────────
    from plexar.intent import Intent, BGPIntent, BGPNeighbor
    intent = Intent(devices=leafs)
    intent.ensure(BGPIntent(asn=65001, neighbors=[
        BGPNeighbor(ip="10.0.0.1", remote_as=65000)
    ]))
    plan = await intent.plan()      # preview
    result = await intent.apply()   # apply
    report = await intent.verify()  # verify

    # ── Digital Twin ──────────────────────────────────
    from plexar.twin import DigitalTwin
    twin = DigitalTwin()
    await twin.capture(network=net)
    result = twin.simulate_device_failure("spine-01")
    print(f"Risk: {result.risk_score}/100")

    # ── AI Query ──────────────────────────────────────
    from plexar.ai import NetworkQuery
    nq = NetworkQuery(network=net)
    r = await nq.ask("which leafs have BGP peers down?")
    print(r.answer)

asyncio.run(main())
# intent/leaf-baseline.yaml
devices:
  role: leaf    # target devices by role
  site: dc1     # and/or site
ensure:
  - type: bgp
    asn: 65001
    neighbors:
      - ip: 10.0.0.1
        remote_as: 65000
        description: spine-01
  - type: ntp
    servers: [10.0.0.100, 10.0.0.101]
  - type: banner
    motd: "AUTHORIZED ACCESS ONLY"
  - type: interface
    name: Ethernet1
    description: uplink-to-spine-01
    admin_state: up
    mtu: 9214
| platform: value | Vendor | Protocol | Output |
|---|---|---|---|
| arista_eos | Arista EOS | SSH/Scrapli | JSON (\| json) |
| cisco_ios | Cisco IOS / IOS-XE | SSH/Scrapli | Regex/TextFSM |
| cisco_nxos | Cisco NX-OS | SSH/NX-API | JSON (\| json) |
| juniper_junos | Juniper JunOS | SSH/Scrapli | XML (display xml) |
| Variable | Default | Description |
|---|---|---|
| PLEXAR_INVENTORY | inventory.yaml | Default inventory path for CLI |
| PLEXAR_AI_MODEL | gpt-4o-mini | AI model for parser/RCA/query |
| PLEXAR_AI_BACKEND | auto | auto \| local \| openai \| anthropic |
| OPENAI_API_KEY | — | Required for OpenAI-backed AI features |
| ANTHROPIC_API_KEY | — | Alternative to OpenAI |
| VAULT_ADDR | — | HashiCorp Vault server URL |
| VAULT_TOKEN | — | Vault authentication token |
| NETBOX_TOKEN | — | NetBox API token |
| NAUTOBOT_TOKEN | — | Nautobot API token |
| PLEXAR_LOG_LEVEL | INFO | DEBUG \| INFO \| WARNING \| ERROR |
| PLEXAR_PLATFORM_URL | — | Platform server URL (v2, reserved) |
| PLEXAR_PLATFORM_TOKEN | — | Platform auth token (v2, reserved) |
Use password_env: VAR_NAME in inventory.yaml — never password: literal_value. The sanitizer redacts credentials in logs, but credentials should never appear in files committed to version control.
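The lookup pattern is simple enough to sketch; this is illustrative of the pattern, not Plexar's actual loader:

```python
import os

def resolve_password(credentials: dict) -> str:
    # The inventory stores the *name* of the environment variable,
    # never the secret itself, so the YAML file is safe to commit.
    var = credentials["password_env"]
    password = os.environ.get(var)
    if password is None:
        raise RuntimeError(f"environment variable {var} is not set")
    return password

os.environ["NET_PASSWORD"] = "example-only"  # normally exported in your shell
print(resolve_password({"password_env": "NET_PASSWORD"}))  # → example-only
```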
Plexar v1 is the tool. v2 adds the platform — an always-on server that wraps the same SDK. The tool works standalone forever; the platform adds persistence, HA, multi-user, and a web UI.
Drift detection, telemetry, gNMI subscriptions run continuously — not just when you run a command.
Login via SSO/OIDC. Full per-user audit trail. Four-eyes approval for high-risk operations.
Operations survive server restarts. Mid-flight intent applies resume from checkpoint. Active-passive failover.
Fleet health at a glance. Operation history. Compliance scores. Audit log search.
Teams are isolated at the data layer. Team A cannot see Team B's devices, operations, or audit logs.
Real-time inventory sync from NetBox/Nautobot change events — no polling required.
When you deploy the platform, upgrading the tool to use it requires adding three lines to ~/.plexar/config.yaml:
# Before: tool runs standalone
tool:
  inventory: ./inventory.yaml

# After: tool routes through platform
tool:
  inventory: ./inventory.yaml
  platform:
    url: https://plexar.corp.com
    token_env: PLEXAR_PLATFORM_TOKEN
All CLI commands, all Python SDK calls, all intent YAML files remain unchanged. The execution moves to the platform. The interface stays the same.
| Package | Install | Contains |
|---|---|---|
| plexar-core | pip install plexar-core | SDK, drivers, security — no CLI |
| plexar | pip install plexar | plexar-core + CLI (today's package) |
| plexar-platform | pip install plexar-platform | API server, workers, web UI |
from plexar import Network works whether you have plexar or plexar-core installed. The tool and platform always ship the same version number — e.g. plexar-core 0.6.0 + plexar 0.6.0 + plexar-platform 0.6.0.
| Error | Cause | Fix |
|---|---|---|
| ModuleNotFoundError: click | CLI extra not installed | pip install plexar[cli] |
| ConnectionError: Authentication failed | Wrong credentials | Check password_env var is exported |
| DriverNotFound: unknown platform 'X' | Platform string typo or missing plugin | Check spelling: arista_eos, cisco_ios, cisco_nxos, juniper_junos |
| ValueError: Input failed sanitization | Hostname/command has shell metacharacters | Remove semicolons, pipes, backticks from input |
| PermissionError: Role VIEWER cannot push_config | RBAC context too low | Set PLEXAR_ROLE=ENGINEER in environment |
| AIParser: No API key configured | Missing LLM credentials | Set OPENAI_API_KEY or use --backend local |
| NetworkX not found | Topology extra missing | pip install plexar[topology] |
| Timeout on large fleet | Default pool too small | Increase: net.pool(max_connections=50) |
# Enable debug logging
export PLEXAR_LOG_LEVEL=DEBUG
plexar bgp show spine-01
# Or in Python
import logging
logging.basicConfig(level=logging.DEBUG)
# Test SSH manually
ssh -v admin@192.168.1.1
# Test from Python
python3 -c "
import asyncio, asyncssh
async def test():
    async with asyncssh.connect('192.168.1.1', username='admin',
                                password='yourpassword',
                                known_hosts=None) as conn:
        result = await conn.run('show version')
        print(result.stdout[:200])
asyncio.run(test())
"