Metadata-Version: 2.4
Name: meshoptixiq-network-discovery
Version: 0.25.0
Summary: Vendor-agnostic network discovery and Neo4j graph ingestion pipeline
Author: MeshOptix Team
License: Proprietary
Project-URL: Homepage, https://meshoptixiq.com
Project-URL: Documentation, https://meshoptixiq.com/docs/
Project-URL: Repository, https://github.com/meshoptixiq/meshoptixiq
Project-URL: Bug Tracker, https://github.com/meshoptixiq/meshoptixiq/issues
Keywords: network,discovery,neo4j,graph,topology,cisco,netmiko
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: System Administrators
Classifier: Intended Audience :: Telecommunications Industry
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: System :: Networking
Classifier: Topic :: System :: Networking :: Monitoring
Classifier: Typing :: Typed
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Requires-Dist: netmiko==4.6.0
Requires-Dist: pydantic<3.0.0,>=2.7.0
Requires-Dist: neo4j==5.15.0
Requires-Dist: pyyaml==6.0.3
Requires-Dist: fastapi==0.133.0
Requires-Dist: uvicorn[standard]==0.41.0
Requires-Dist: wsproto>=1.2.0
Requires-Dist: requests==2.32.5
Requires-Dist: cryptography>=42.0.0
Requires-Dist: pyjwt>=2.8.0
Requires-Dist: slowapi>=0.1.9
Requires-Dist: gunicorn>=22.0
Requires-Dist: tenacity>=8.2
Requires-Dist: lark>=1.1
Provides-Extra: postgres
Requires-Dist: psycopg[binary]==3.2.6; extra == "postgres"
Requires-Dist: psycopg-pool>=3.2; extra == "postgres"
Provides-Extra: mcp
Requires-Dist: mcp>=1.0.0; extra == "mcp"
Requires-Dist: anyio>=4.0; extra == "mcp"
Provides-Extra: enterprise
Requires-Dist: hvac>=2.0; extra == "enterprise"
Requires-Dist: boto3>=1.26; extra == "enterprise"
Requires-Dist: azure-keyvault-secrets>=4.8; extra == "enterprise"
Requires-Dist: azure-identity>=1.16; extra == "enterprise"
Requires-Dist: google-cloud-secret-manager>=2.20; extra == "enterprise"
Requires-Dist: authlib>=1.3; extra == "enterprise"
Requires-Dist: splunk-hec-handler>=1.0; extra == "enterprise"
Requires-Dist: elasticsearch>=8.0; extra == "enterprise"
Requires-Dist: opensearch-py>=2.0; extra == "enterprise"
Requires-Dist: opentelemetry-sdk>=1.20; extra == "enterprise"
Requires-Dist: opentelemetry-instrumentation-fastapi>=0.42b0; extra == "enterprise"
Requires-Dist: opentelemetry-exporter-otlp>=1.20; extra == "enterprise"
Requires-Dist: newrelic>=9.0; extra == "enterprise"
Requires-Dist: ddtrace>=2.0; extra == "enterprise"
Provides-Extra: integrations
Requires-Dist: httpx>=0.27; extra == "integrations"
Provides-Extra: ai
Requires-Dist: anthropic>=0.40; extra == "ai"
Requires-Dist: openai>=1.30; extra == "ai"
Provides-Extra: notifications
Requires-Dist: httpx>=0.27; extra == "notifications"
Provides-Extra: schedule
Requires-Dist: croniter>=1.3; extra == "schedule"
Provides-Extra: cluster
Requires-Dist: redis[asyncio]>=5.0; extra == "cluster"
Requires-Dist: coredis>=3.4.0; extra == "cluster"
Provides-Extra: metrics
Requires-Dist: prometheus-client>=0.20; extra == "metrics"
Provides-Extra: gnmi
Requires-Dist: grpcio>=1.60; extra == "gnmi"
Requires-Dist: protobuf>=4.0; extra == "gnmi"
Provides-Extra: netconf
Requires-Dist: ncclient>=0.6; extra == "netconf"
Provides-Extra: kafka
Requires-Dist: aiokafka>=0.9; extra == "kafka"
Provides-Extra: dev
Requires-Dist: pytest==9.0.2; extra == "dev"
Requires-Dist: pytest-asyncio>=0.23; extra == "dev"
Requires-Dist: pytest-cov>=5.0; extra == "dev"
Requires-Dist: ruff==0.9.6; extra == "dev"
Requires-Dist: mypy==1.19.1; extra == "dev"
Requires-Dist: httpx==0.28.1; extra == "dev"
Requires-Dist: fakeredis[aioredis]>=2.0; extra == "dev"
Requires-Dist: prometheus-client>=0.20; extra == "dev"
Requires-Dist: mcp>=1.0.0; extra == "dev"
Requires-Dist: anyio>=4.0; extra == "dev"
Requires-Dist: croniter>=1.3; extra == "dev"

# MeshOptixIQ Network Discovery

Vendor-agnostic network discovery and graph ingestion pipeline. Collects CLI output from network devices via Netmiko over SSH, parses it into canonical Pydantic models, normalizes and deduplicates the data, and ingests it into Neo4j or PostgreSQL.

```
┌─────────────┐     ┌──────────────┐     ┌────────────┐     ┌───────────────┐     ┌──────────┐
│ Inventory   │────▶│  Netmiko     │────▶│  Vendor    │────▶│ Normalization │────▶│ Neo4j /  │
│ (YAML)      │     │  Collector   │     │  Parsers   │     │ & Dedup       │     │ Postgres │
└─────────────┘     └──────────────┘     └────────────┘     └───────────────┘     └──────────┘
                          │                     │                    │
                          ▼                     ▼                    ▼
                    Raw CLI Output       Canonical Models     Validated Facts
                    (JSON files)         (Pydantic v2)        (Schema v1)
```

---

## Table of Contents

1. [Installation](#installation)
2. [Configuration](#configuration)
3. [CLI Usage](#cli-usage)
4. [API Server](#api-server)
5. [Query Library](#query-library)
6. [Docker Deployment](#docker-deployment)
7. [Cluster Mode](#cluster-mode)
8. [Kubernetes / Helm](#kubernetes--helm)
9. [Database Backends](#database-backends)
10. [Supported Vendors](#supported-vendors)
11. [Data Model](#data-model)
12. [Enterprise Features](#enterprise-features)
13. [MCP Server](#mcp-server)
14. [Development](#development)
15. [Licensing](#licensing)

---

## Installation

### Local (development)

```bash
cd network_discovery

# Core install (Neo4j backend)
pip install -e ".[dev]"

# With PostgreSQL support
pip install -e ".[dev,postgres]"

# With cluster (Redis) support
pip install -e ".[dev,cluster]"

# With MCP server support
pip install -e ".[dev,mcp]"

# All enterprise features
pip install -e ".[dev,enterprise]"
```

After installation the `meshq` CLI is available on your PATH.

### Docker

```bash
# From the repo root
docker build --target runtime -t meshoptixiq/meshoptixiq:latest .
docker build --target enterprise -t meshoptixiq/meshoptixiq:enterprise-latest .
```

---

## Configuration

### Core Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `API_KEY` | *(required)* | API key — server refuses to start without it |
| `GRAPH_BACKEND` | `neo4j` | `neo4j` or `postgres` |
| `MESHOPTIXIQ_LICENSE_KEY` | *(required by API server)* | License key; the CLI and MCP do not need it (they resolve their plan from the local API) |
| `MESHOPTIXIQ_API_URL` | `http://localhost:8000` | Local API server URL used by CLI + MCP for plan resolution |
| `DEVICE_PASSWORD` | *(none)* | Default SSH password for collection |
| `CORS_ORIGINS` | *(empty — denied)* | Comma-separated allowed CORS origins |

### Neo4j

| Variable | Default | Description |
|----------|---------|-------------|
| `NEO4J_URI` | `bolt://localhost:7687` | Bolt connection URI |
| `NEO4J_USER` | `neo4j` | Username |
| `NEO4J_PASSWORD` | *(empty)* | Password (**required in production**) |

### PostgreSQL

| Variable | Default | Description |
|----------|---------|-------------|
| `POSTGRES_DSN` | `postgresql://postgres:postgres@localhost:5432/network_discovery` | Connection string |
| `POSTGRES_POOL_MIN` | `2` | Minimum pool connections |
| `POSTGRES_POOL_MAX` | `10` | Maximum pool connections |

### Clustering (Redis)

| Variable | Default | Description |
|----------|---------|-------------|
| `REDIS_URL` | *(none)* | Activates cluster mode when set |

### Authentication & RBAC

| Variable | Default | Description |
|----------|---------|-------------|
| `AUTH_MODE` | `api_key` | `api_key`, `oidc`, or `both` |
| `OIDC_DISCOVERY_URL` | *(none)* | OIDC provider discovery URL |
| `OIDC_CLIENT_ID` | *(none)* | OIDC client ID |
| `RBAC_POLICY_FILE` | *(none)* | Path to YAML RBAC policy file (hot-reloaded every 30 s) |
| `RBAC_POLICY` | *(none)* | Inline YAML RBAC policy string |
| `RBAC_RELOAD_INTERVAL` | `30` | Seconds between RBAC file mtime checks |
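
The mtime-based hot reload can be sketched as follows. This is a minimal stdlib illustration of the mechanic only — the class and method names are assumptions, and the real loader parses YAML and broadcasts reloads across pods via Redis Pub/Sub:

```python
import os


class PolicyWatcher:
    """Reload a policy file whenever its mtime changes (checked on demand)."""

    def __init__(self, path: str):
        self.path = path
        self._mtime = None
        self.policy = None

    def check(self) -> bool:
        """Return True if the file changed since the last check and was reloaded."""
        try:
            mtime = os.stat(self.path).st_mtime
        except FileNotFoundError:
            return False
        if mtime == self._mtime:
            return False
        self._mtime = mtime
        with open(self.path) as f:
            self.policy = f.read()  # the real loader parses YAML here
        return True
```

A background task calling `check()` every `RBAC_RELOAD_INTERVAL` seconds gives the documented behavior without restarting the server.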

### Inventory File

```yaml
# inventory.yaml
devices:
  - hostname: core-sw-01
    host: 192.168.1.1
    vendor: cisco_ios
    username: admin
    password_env: DEVICE_PASSWORD   # reads from environment variable

  - hostname: fw-01
    host: 10.0.0.1
    vendor: paloalto_panos
    username: netops
    key_file: ~/.ssh/id_rsa         # SSH key auth
```
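
Note that `password_env` names an environment variable rather than holding the secret itself, so `inventory.yaml` stays safe to commit. A simplified stand-in for one inventory record (the real models are Pydantic v2 classes with more fields and validation) shows the resolution:

```python
import os
from dataclasses import dataclass
from typing import Optional


@dataclass
class DeviceEntry:
    """Simplified stand-in for one inventory record."""
    hostname: str
    host: str
    vendor: str
    username: str
    password_env: Optional[str] = None
    key_file: Optional[str] = None

    def resolve_password(self) -> Optional[str]:
        # The secret is read from the environment at collection time;
        # key_file entries skip password auth entirely.
        if self.password_env:
            return os.environ.get(self.password_env)
        return None
```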

---

## CLI Usage

### `meshq ingest` — Full pipeline

```bash
meshq ingest --source inventory.yaml [--output-dir ./output] [--dry-run] [--workers N] [-v]
```

| Flag | Description |
|------|-------------|
| `--source PATH` | Inventory YAML file (required) |
| `--output-dir DIR` | Directory for raw JSON output (default: `output/`) |
| `--dry-run` | Collect, parse, normalize but skip database writes |
| `--workers N` | Concurrent SSH threads (default: 10) |
| `-v` | Debug logging |

Pipeline stages: API license check → collect → parse → normalize → ingest.

### `meshq collect` — Collection modes

```bash
# Standard: collect all devices in inventory
meshq collect --source inventory.yaml

# Dispatch: push device list to Redis queue, then exit
meshq collect --source inventory.yaml --dispatch

# Worker: pop devices from queue, collect, ingest, repeat
meshq collect --worker [--poll-interval 5]
```

| Flag | Description |
|------|-------------|
| `--source PATH` | Inventory YAML (required for standard + dispatch) |
| `--dispatch` | Push all devices to Redis queue and exit |
| `--worker` | Pop-collect-ingest loop (requires `REDIS_URL`) |
| `--poll-interval N` | Idle seconds between pops in `--worker` mode (default: 5) |

### `meshq parse` — Parse existing raw files

```bash
meshq parse --raw-dir output/raw/
```

### `meshq export` — Export Ansible inventory

```bash
meshq export --format ansible [--output inventory.json]
meshq export --format ansible --output inventory.ini   # legacy INI format
```

### `meshq sync` — NetBox sync

```bash
meshq sync --target netbox [--direction push|pull|both] [--dry-run]
```

### `meshq status` / `meshq version`

```bash
meshq status    # Check backend connectivity
meshq version   # Print version (no license required)
```

---

## API Server

### Starting the server

```bash
# Local
uvicorn network_discovery.api.main:app --host 0.0.0.0 --port 8000

# Docker
docker run -d -p 8000:8000 \
  -e NEO4J_URI=bolt://host.docker.internal:7687 \
  -e NEO4J_PASSWORD=secret \
  -e API_KEY=changeme \
  meshoptixiq/meshoptixiq:latest
```

Interactive Swagger docs: **http://localhost:8000/docs**

### Endpoints

#### Health

| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/health` | No | Process alive check |
| `GET` | `/health/ready` | No | Backend reachability; includes `pool_available` for postgres |
| `GET` | `/health/redis` | No | Redis cluster connectivity |
| `GET` | `/health/license` | No | License plan tier, expiry date, and days remaining |

#### Queries

| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/queries/` | Yes | List all 25 queries with metadata |
| `GET` | `/queries/{name}` | Yes | Query details |
| `POST` | `/queries/{name}/execute` | Yes | Execute with parameters |

#### Collection Queue (cluster mode)

| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/collect/status` | Yes | Queue depth + in-flight count |
| `POST` | `/collect/dispatch` | Yes | Push device list to queue |
| `POST` | `/collect/recover` | Yes | Re-queue stale in-flight devices |

#### Admin

| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/admin/config` | Yes | Sanitized runtime config (admin role) |
| `POST` | `/admin/rbac/reload` | Yes | Force RBAC policy reload + broadcast |

#### Other

| Method | Path | Auth | Description |
|--------|------|------|-------------|
| `GET` | `/auth/me` | Yes | Current user context |
| `GET` | `/events` | Yes | SSE stream (30 s keepalive) |
| `GET` | `/history/snapshots` | Yes | Historical metric snapshots |
| `GET` | `/history/diff` | Yes | Compare two snapshots by timestamp (`?from_ts=…&to_ts=…`) |
| `POST` | `/graph/whatif` | Yes | Simulate proposed NetworkFacts against current state (pro+ only) |
| `GET` | `/inventory/ansible` | Yes | Ansible dynamic inventory (`?format=ini`) |

### Authentication

`API_KEY` is required. Pass via `X-API-Key` header or `?api_key=` query parameter (for SSE EventSource):

```bash
curl -H "X-API-Key: $API_KEY" http://localhost:8000/queries/
```
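
From Python, the two auth styles can be built like this (a stdlib sketch — `API_URL` is the assumed local server address from the examples above):

```python
from urllib.parse import urlencode, urlsplit, urlunsplit

API_URL = "http://localhost:8000"  # assumed local server


def auth_headers(api_key: str) -> dict:
    """Standard clients send the key in the X-API-Key header."""
    return {"X-API-Key": api_key}


def sse_url(api_key: str, path: str = "/events") -> str:
    """EventSource cannot set custom headers, so SSE clients pass ?api_key= instead."""
    scheme, netloc, _, _, _ = urlsplit(API_URL)
    return urlunsplit((scheme, netloc, path, urlencode({"api_key": api_key}), ""))
```

Pass `auth_headers(...)` to your HTTP client of choice for regular endpoints, and `sse_url(...)` to an EventSource when consuming `/events`.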

---

## Query Library

25 queries across 7 categories, each with Cypher (Neo4j) and SQL (PostgreSQL) implementations.

### Topology

| Query | Parameters | Description |
|-------|-----------|-------------|
| `device_neighbors` | `device_name` | Devices directly connected to a given device |
| `interface_neighbors` | `device_a`, `device_b` | Interfaces connecting two devices |
| `topology_edges` | *(none)* | All edges in the topology graph |
| `topology_summary` | *(none)* | Device and link count summary |
| `all_devices` | *(none)* | All known devices |
| `topology_neighborhood` | `device`, `depth` | N-hop neighborhood subgraph around a focal device |

### Endpoints

| Query | Parameters | Description |
|-------|-----------|-------------|
| `locate_endpoint_by_ip` | `ip`, `vrf` *(optional)* | Find endpoint by IP address; filter by VRF |
| `locate_endpoint_by_mac` | `mac` | Find endpoint by MAC address |
| `endpoints_on_interface` | `device`, `interface` | Endpoints on a specific interface |

### Blast Radius

| Query | Parameters | Description |
|-------|-----------|-------------|
| `blast_radius_interface` | `device`, `interface` | Endpoints impacted by interface failure |
| `blast_radius_device` | `device` | Endpoints impacted by device failure |
| `blast_radius_vlan` | `vlan` | Endpoints in a VLAN |
| `blast_radius_subnet` | `cidr` | Endpoints in a subnet |

### Addressing

| Query | Parameters | Description |
|-------|-----------|-------------|
| `ips_in_subnet` | `cidr`, `vrf` *(optional)* | IPs allocated in a subnet; filter by VRF |
| `subnets_on_device` | `device` | Subnets present on a device |
| `orphaned_ips` | *(none)* | IPs with no known subnet |

### Hygiene

| Query | Parameters | Description |
|-------|-----------|-------------|
| `devices_without_neighbors` | *(none)* | Isolated devices |
| `interfaces_without_ips` | *(none)* | Interfaces missing IP assignments |
| `endpoints_without_location` | *(none)* | Endpoints with no physical port |

### Firewall

| Query | Parameters | Description |
|-------|-----------|-------------|
| `firewall_rules_by_device` | `device_name` | All rules for a firewall, ordered by precedence |
| `firewall_rules_by_zone_pair` | `src_zone`, `dst_zone` | Rules between a source/destination zone pair |
| `path_analysis` | `src_ip`, `dst_ip` | Rules matching traffic between two IPs |
| `all_firewall_devices` | *(none)* | All devices with collected firewall rules |
| `deny_rules_summary` | *(none)* | All deny/drop/reject rules across every firewall |

### Inventory

| Query | Parameters | Description |
|-------|-----------|-------------|
| `update_device_metadata` | *(none)* | Pull nb_site/tenant/rack from NetBox onto Device nodes |

**Example:**

```bash
curl -X POST http://localhost:8000/queries/blast_radius_device/execute \
  -H "Content-Type: application/json" \
  -H "X-API-Key: changeme" \
  -d '{"parameters": {"device": "core-sw-01"}}'
```

---

## Docker Deployment

```bash
# API + Neo4j
docker run -d -p 8000:8000 \
  -e NEO4J_URI=bolt://host.docker.internal:7687 \
  -e NEO4J_PASSWORD=secret \
  -e API_KEY=changeme \
  meshoptixiq/meshoptixiq:latest

# Collection job (API server must be running; CLI resolves plan from it)
docker run --rm \
  -e NEO4J_URI=bolt://host.docker.internal:7687 \
  -e NEO4J_PASSWORD=secret \
  -e MESHOPTIXIQ_API_URL=http://host.docker.internal:8000 \
  -v $(pwd)/configs:/app/configs \
  meshoptixiq/meshoptixiq:latest \
  meshq ingest --source /app/configs/inventory.yaml
```

---

## Cluster Mode

Multiple API pods share state via Redis. Set `REDIS_URL` to enable:

```bash
export API_KEY=secret
docker compose -f docker-compose.cluster.yml up --build
```

The cluster stack runs 3 API instances, 2 collection workers, a dispatcher, Redis, Neo4j, and nginx.

### How device distribution works

```
meshq collect --dispatch     →   meshq:collect_queue (Redis List)
                                          │ LPOP (atomic)
                          ┌───────────────┼───────────────┐
                       worker-1        worker-2        worker-N
                       (SSH → parse → ingest)
```

Each device is claimed by exactly one worker per collection cycle via `LPOP`. Stale in-flight items are re-queued automatically via `POST /collect/recover`.

### Queue Redis keys

| Key | Type | Purpose |
|-----|------|---------|
| `meshq:collect_queue` | List | Pending device JSON blobs |
| `meshq:collect_processing` | Hash | `hostname → {device_json, started_at}` |
| `meshq:collect_results` | Sorted Set | `hostname` → unix-ms of last success |
| `meshq:collect_lock` | String (NX TTL=60s) | One dispatcher at a time |
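
Using those keys, the claim/complete cycle looks roughly like this. The helper names are assumptions, not the project's actual functions, but the calls match the redis-py client API:

```python
import json
import time

QUEUE = "meshq:collect_queue"
PROCESSING = "meshq:collect_processing"
RESULTS = "meshq:collect_results"


def claim_device(r):
    """Atomically claim one pending device (LPOP) and record it as in-flight."""
    blob = r.lpop(QUEUE)
    if blob is None:
        return None  # queue drained
    device = json.loads(blob)
    r.hset(PROCESSING, device["hostname"],
           json.dumps({"device_json": blob, "started_at": time.time()}))
    return device


def complete_device(r, hostname):
    """Drop the in-flight record and stamp the last-success time (unix ms)."""
    r.hdel(PROCESSING, hostname)
    r.zadd(RESULTS, {hostname: int(time.time() * 1000)})
```

Because `LPOP` is atomic, two workers can never claim the same device; anything stuck in `meshq:collect_processing` past its `started_at` is what `/collect/recover` re-queues.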

---

## Kubernetes / Helm

```bash
helm lint helm/meshoptixiq

helm install meshoptixiq helm/meshoptixiq \
  --set api.key=secret \
  --set neo4j.uri=bolt://neo4j:7687 \
  --set neo4j.password=secret \
  --set redis.url=redis://redis:6379 \
  --set collector.inventoryConfigMap=my-inventory
```

The chart deploys:
- API Deployment + HPA (CPU 70%, min 2 / max 10)
- Collector worker Deployment (2 replicas, `meshq collect --worker`)
- Dispatcher CronJob (`concurrencyPolicy: Forbid`)
- SSE-safe nginx Ingress
- ConfigMap + Secret (no inline secrets in Deployment)

---

## Database Backends

### Neo4j (default)

Set `GRAPH_BACKEND=neo4j`. Uses `MERGE` for idempotent ingestion and native graph relationships.
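
As a sketch of why `MERGE` makes ingestion idempotent — re-running the pipeline matches the existing node by its key instead of creating a duplicate — here is an illustrative statement builder. The label and property names are examples only, not the project's actual schema:

```python
def merge_device_cypher(hostname: str):
    """Build an idempotent MERGE for a Device node keyed on hostname."""
    query = (
        "MERGE (d:Device {hostname: $hostname}) "  # match-or-create by key
        "SET d.last_seen = timestamp() "           # update properties either way
        "RETURN d"
    )
    return query, {"hostname": hostname}
```

With the official `neo4j` driver this would run as `session.run(*merge_device_cypher("core-sw-01"))`.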

### PostgreSQL

Set `GRAPH_BACKEND=postgres`. Install the extra:

```bash
pip install -e ".[postgres]"
```

The provider uses a `ConnectionPool` (`psycopg-pool`) with configurable min/max size:

```bash
POSTGRES_POOL_MIN=2   # default
POSTGRES_POOL_MAX=10  # default
```

Tables are auto-created from `graph/postgres/schema.sql` on first connection. All writes use `INSERT ... ON CONFLICT` upserts inside explicit transactions.
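
The upsert pattern can be illustrated with a small SQL builder — the table and column names below are hypothetical, and in the real provider the resulting statements run inside explicit transactions on pooled connections:

```python
def upsert_sql(table: str, key_cols: list, cols: list) -> str:
    """Build an INSERT ... ON CONFLICT upsert with psycopg named placeholders."""
    placeholders = ", ".join(f"%({c})s" for c in cols)
    updates = ", ".join(f"{c} = EXCLUDED.{c}" for c in cols if c not in key_cols)
    return (
        f"INSERT INTO {table} ({', '.join(cols)}) VALUES ({placeholders}) "
        f"ON CONFLICT ({', '.join(key_cols)}) DO UPDATE SET {updates}"
    )
```

`EXCLUDED` refers to the row that would have been inserted, so re-ingesting the same device refreshes its attributes instead of raising a duplicate-key error.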

---

## Supported Vendors

| Vendor | Key | Topology | Firewall Policies |
|--------|-----|----------|-------------------|
| Cisco IOS | `cisco_ios` | Interfaces, ARP, CDP, VLANs, MACs | — |
| Juniper JunOS | `juniper_junos` | Interfaces, ARP, LLDP | Security policies, address book |
| Arista EOS | `arista_eos` | Interfaces, ARP, LLDP | — |
| Aruba OS | `aruba_os` | Interfaces, ARP | — |
| Palo Alto PAN-OS | `paloalto_panos` | — | Security policies, address objects, service objects |
| Fortinet FortiGate | `fortinet` | — | Firewall policies, address objects |
| Cisco ASA | `cisco_asa` | — | ACLs, object groups |
| Cisco FTD | `cisco_ftd` | — | Policies |
| CheckPoint GAIA | `checkpoint_gaia` | — | Policies |

### Adding a New Vendor

1. Create `vendors/<vendor_key>/commands.yaml` with CLI commands.
2. Create parser classes under `parsers/<vendor_key>/` inheriting from `BaseParser`.
3. Decorate with `@register` for auto-registration.
4. Import in `parsers/<vendor_key>/__init__.py`.
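
The steps above might look like the following. This is an illustrative reimplementation — the real `BaseParser` and `@register` live in `parsers/base.py` and `parsers/registry.py` and may differ in shape — and `acme_os` is a hypothetical vendor key:

```python
# Illustrative registry: maps (vendor, command) -> parser class.
PARSER_REGISTRY: dict = {}


def register(cls):
    """Class decorator: index the parser by (vendor, command)."""
    PARSER_REGISTRY[(cls.vendor, cls.command)] = cls
    return cls


class BaseParser:
    vendor = ""
    command = ""

    def parse(self, raw: str):
        raise NotImplementedError


@register
class AcmeOsInterfaces(BaseParser):
    vendor = "acme_os"           # hypothetical vendor key
    command = "show interfaces"

    def parse(self, raw: str):
        # One interface name per line, for the sake of the sketch.
        return [{"name": line.strip()} for line in raw.splitlines() if line.strip()]
```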

---

## Data Model

### Core Entities

| Model | Key Fields | Description |
|-------|-----------|-------------|
| `DeviceModel` | `hostname` | Network device |
| `InterfaceModel` | `device_hostname` + `name` | Physical or logical interface |
| `IPAddressModel` | `address` + `vrf` | IPv4/IPv6 address on an interface; `vrf=None` treated as global routing table |
| `SubnetModel` | `network_address` + `vrf` | IP subnet; supports M&A multi-tenant isolation via `vrf` + `tenant` fields |
| `VLANModel` | `device_hostname` + `vlan_id` | VLAN membership |
| `MACModel` | `address` | Layer-2 MAC address |
| `EndpointModel` | `mac_address` + `ip_address` | Host inferred from ARP + MAC tables |
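
The key fields drive deduplication during normalization. A simplified stand-in (the real canonical models are Pydantic v2 classes in `normalization/models.py`) shows how `IPAddressModel`'s `address` + `vrf` key works, with `vrf=None` meaning the global routing table:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass(frozen=True)
class IPAddressKey:
    """Simplified dedup key for an IP record."""
    address: str
    vrf: Optional[str]  # None = global routing table


def dedup_ips(records: list) -> list:
    """Keep one record per (address, vrf) key; first occurrence wins."""
    seen = {}
    for rec in records:
        key = IPAddressKey(rec["address"], rec.get("vrf"))
        seen.setdefault(key, rec)
    return list(seen.values())
```

The same address in two different VRFs survives as two records, which is what allows overlapping address space across tenants.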

### Firewall Entities

| Model | Key Fields | Description |
|-------|-----------|-------------|
| `FirewallRuleModel` | `device_hostname` + `rule_id` | Security policy rule |
| `AddressObjectModel` | `device_hostname` + `name` | Named address object |
| `ServiceObjectModel` | `device_hostname` + `name` | Named service/port object |

### Graph Relationships

| Relationship | From → To | Description |
|-------------|-----------|-------------|
| `HAS_INTERFACE` | Device → Interface | Device owns this interface |
| `HAS_IP` | Interface → IPAddress | IP assigned to interface |
| `IN_SUBNET` | IPAddress → Subnet | IP belongs to subnet |
| `TAGGED_IN` | Interface → VLAN | Interface participates in VLAN |
| `CONNECTED_TO` | Interface → Interface | Physical link (CDP/LLDP) |
| `LEARNED_ON` | MAC → Interface | MAC seen on this port |
| `USES_IP` | Endpoint → IPAddress | Endpoint uses this IP |
| `USES_MAC` | Endpoint → MAC | Endpoint uses this MAC |
| `HAS_FIREWALL_RULE` | Device → FirewallRule | Device owns this rule |
| `HAS_ADDRESS_OBJECT` | Device → AddressObject | Device owns this address object |
| `HAS_SERVICE_OBJECT` | Device → ServiceObject | Device owns this service object |

---

## Enterprise Features

Install the enterprise extras:

```bash
pip install -e ".[enterprise]"
```

| Feature | Config |
|---------|--------|
| **OIDC authentication** | `AUTH_MODE=oidc`, `OIDC_DISCOVERY_URL`, `OIDC_CLIENT_ID` |
| **RBAC policy** | `RBAC_POLICY_FILE` or `RBAC_POLICY` (hot-reloaded every 30 s; instant cross-pod via Redis Pub/Sub) |
| **Audit logging** | `AUDIT_LOG_ENABLED=true` → Splunk, Elasticsearch, OpenSearch, webhook |
| **Secrets management** | `SECRETS_PROVIDER=vault\|aws\|azure\|gcp` |
| **Distributed tracing** | Auto-detected: Datadog → New Relic → OTLP |
| **NetBox sync** | `meshq sync --target netbox` |
| **SOAR webhooks** | `SOAR_RULES` JSON + `SOAR_WEBHOOK_URL` |
| **Ansible inventory** | `GET /inventory/ansible` or `meshq export --format ansible` |

---

## MCP Server

Exposes the network graph to AI assistants (Claude Desktop, Claude Code, any MCP client) without HTTP.

```bash
pip install -e ".[mcp]"

GRAPH_BACKEND=neo4j \
NEO4J_URI=bolt://localhost:7687 \
NEO4J_PASSWORD=password \
MESHOPTIXIQ_API_URL=http://localhost:8000 \
  meshq-mcp
```

23 tools across 7 categories (topology, endpoints, blast_radius, addressing, hygiene, firewall, inventory). Requires pro+ license (resolved from local API).

**Claude Desktop config:**

```json
{
  "mcpServers": {
    "meshoptixiq": {
      "command": "meshq-mcp",
      "env": {
        "GRAPH_BACKEND": "neo4j",
        "NEO4J_URI": "bolt://localhost:7687",
        "NEO4J_PASSWORD": "your-password",
        "MESHOPTIXIQ_API_URL": "http://localhost:8000"
      }
    }
  }
}
```

---

## Development

### Running Tests

```bash
cd network_discovery
pytest tests/ -v
# 1346 tests — unit, integration, e2e
```

Neither `psycopg`/`psycopg_pool` nor a real database is required for the test suite; all database interactions are mocked.

### Project Structure

```
network_discovery/
├── network_discovery/
│   ├── api/
│   │   ├── main.py                 # FastAPI app, lifespan, middleware
│   │   ├── auth.py                 # API key + OIDC authentication
│   │   ├── cluster.py              # Redis client factory
│   │   ├── collect_queue.py        # Redis work queue (dispatch/pop/complete)
│   │   ├── dependencies.py         # DB provider injection
│   │   ├── snapshot_store.py       # Metric ring buffer (local + Redis)
│   │   └── routes/
│   │       ├── queries.py          # /queries/* endpoints
│   │       ├── health.py           # /health/* endpoints
│   │       ├── admin.py            # /admin/* endpoints
│   │       ├── collect.py          # /collect/* endpoints
│   │       ├── events.py           # SSE /events endpoint
│   │       ├── history.py          # /history/snapshots + /history/diff endpoints
│   │       ├── whatif.py           # /graph/whatif endpoint (what-if impact analysis)
│   │       └── inventory.py        # /inventory/ansible endpoint
│   ├── collectors/
│   │   └── netmiko_collector.py    # SSH collection + --worker loop
│   ├── enterprise/
│   │   ├── auth/
│   │   │   ├── oidc.py             # OIDC JWT validation
│   │   │   └── rbac.py             # RBAC policy loader (mtime + pub/sub)
│   │   ├── audit/logger.py         # Splunk/ES/webhook audit
│   │   ├── integrations/
│   │   │   ├── netbox.py           # NetBox sync
│   │   │   └── soar.py             # SOAR webhook dispatcher
│   │   ├── observability/tracing.py  # DD/NR/OTLP tracing
│   │   └── secrets/resolver.py     # Vault/AWS/Azure/GCP secrets
│   ├── graph/
│   │   ├── provider.py             # Abstract GraphProvider interface
│   │   ├── neo4j_provider.py       # Neo4j MERGE-based ingestion
│   │   └── postgres_provider.py    # PostgreSQL pool + ON CONFLICT
│   ├── mcp/
│   │   └── server.py               # MCP server (23 tools)
│   ├── parsers/
│   │   ├── base.py                 # BaseParser ABC
│   │   ├── registry.py             # @register decorator + dispatch
│   │   ├── cisco_ios/
│   │   ├── juniper_junos/
│   │   ├── arista_eos/
│   │   ├── paloalto_panos/         # security policies, address/service objects
│   │   ├── fortinet/               # firewall policies, address objects
│   │   ├── cisco_asa/              # ACLs, object groups
│   │   ├── cisco_ftd/
│   │   └── checkpoint_gaia/
│   ├── normalization/
│   │   ├── models.py               # Canonical Pydantic models (incl. firewall)
│   │   └── normalize.py            # Dedup, subnet derivation, integrity
│   ├── queries/
│   │   ├── registry.yaml           # 25 query definitions
│   │   └── v1/                     # Cypher + SQL per query
│   ├── licensing/
│   │   ├── verifier.py             # License client (Cython-compiled)
│   │   └── gates.py                # Plan feature gating (Cython-compiled)
│   ├── local_api_client.py         # get_license_status() — CLI + MCP license helper
│   ├── cli.py                      # meshq entry point
│   ├── ingest.py                   # Ingestion orchestrator
│   └── scanner.py                  # CIDR TCP scanner
├── tests/                          # 1346 tests (unit + integration + e2e)
├── pyproject.toml
└── README.md
```

### Code Quality

```bash
ruff check .
mypy network_discovery/
```

---

## Licensing

Only the **API server** requires a license key. The CLI and MCP resolve their plan tier by calling `GET /health/license` on the local API, so a single key on the API server covers all tools.

```bash
# Set on the API server
export MESHOPTIXIQ_LICENSE_KEY=your-key

# Persistent file on the API server host
mkdir -p ~/.meshoptixiq
echo "your-key" > ~/.meshoptixiq/license.key
chmod 600 ~/.meshoptixiq/license.key
```

Point the CLI and MCP at the API server if it is not running locally:

```bash
export MESHOPTIXIQ_API_URL=http://api-server:8000
```
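
A client consuming `/health/license` might compute remaining validity like this. The field names (`plan`, `expires_at`) are assumptions about the response shape, not a documented contract:

```python
import datetime


def days_remaining(status: dict) -> int:
    """Days until expiry from a /health/license-style payload.

    Assumes 'expires_at' is an ISO date string (illustrative field name).
    """
    expires = datetime.date.fromisoformat(status["expires_at"])
    return (expires - datetime.date.today()).days
```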

| Tier | Devices | Network Devices | API Queries | Firewall Queries | MCP Server |
|------|---------|-----------------|-------------|------------------|------------|
| Community | 1 | 1 | — | — | — |
| Starter | 1 | 100 | — | — | — |
| Pro | 5 | 750 | ✅ | ✅ | ✅ |
| Enterprise | Unlimited | Unlimited | ✅ | ✅ | ✅ |

Check the current plan and expiry at any time:

```bash
meshq license           # shows plan, expiry, feature flags
curl http://localhost:8000/health/license
```
