Metadata-Version: 2.4
Name: sfacts
Version: 2.5.0
Summary: Modern network discovery and fact collection toolkit with secure credential management and structured logging
Project-URL: Homepage, https://gitlab.com/netodata/simple-facts
Project-URL: Repository, https://gitlab.com/netodata/simple-facts
Project-URL: Issues, https://gitlab.com/netodata/simple-facts/-/issues
Project-URL: Changelog, https://gitlab.com/netodata/simple-facts/-/blob/main/CHANGELOG.md
Author-email: Milan Zapletal <info@netodata.io>
License: MIT
License-File: LICENSE
Keywords: automation,discovery,facts,netmiko,network,ssh,textfsm
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: System Administrators
Classifier: Intended Audience :: Telecommunications Industry
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: System :: Networking
Classifier: Topic :: System :: Systems Administration
Requires-Python: >=3.9
Requires-Dist: click>=8.0.0
Requires-Dist: keyring>=25.5.0
Requires-Dist: loguru>=0.7.3
Requires-Dist: napalm>=4.1.0
Requires-Dist: netmiko>=4.2.0
Requires-Dist: networklab>=26.3
Requires-Dist: nornir-napalm>=0.4.0
Requires-Dist: nornir-utils>=0.2.0
Requires-Dist: nornir>=3.3.0
Requires-Dist: ntc-templates>=3.5.0
Requires-Dist: paramiko>=3.3.1
Requires-Dist: pynautobot>=2.6.1
Requires-Dist: pynetbox>=7.5.0
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: rich>=13.0.0
Requires-Dist: textfsm>=1.1.2
Requires-Dist: types-pyyaml>=6.0.0
Provides-Extra: community
Requires-Dist: napalm-huawei-vrp; extra == 'community'
Requires-Dist: napalm-panos; extra == 'community'
Requires-Dist: napalm-ros; extra == 'community'
Requires-Dist: napalm-sros; extra == 'community'
Provides-Extra: workflows
Requires-Dist: prefect>=3.0.0; (python_version >= '3.9') and extra == 'workflows'
Description-Content-Type: text/markdown

# Simple Facts - Network Discovery & Fact Collection Toolkit

[![PyPI](https://img.shields.io/pypi/v/sfacts)](https://pypi.org/project/sfacts/)
[![Python](https://img.shields.io/pypi/pyversions/sfacts)](https://pypi.org/project/sfacts/)
[![License](https://img.shields.io/pypi/l/sfacts)](https://gitlab.com/netodata/simple-facts/-/blob/main/LICENSE)

Automated network inventory toolkit that syncs your actual network data to **NetBox** *or* **Nautobot** IPAM in minutes. Discover devices, collect facts, and eliminate manual documentation drift. Built on Netmiko, TextFSM, pynetbox, and pynautobot, with optional Prefect workflow orchestration.

## Features

- **🔍 Network Discovery**: Automatic device discovery with SSH port scanning
- **🔧 Device Identification**: Auto-detect device types using Netmiko SSHDetect
- **📊 Fact Collection**: Collect structured device information using TextFSM
- **🏢 NetBox Integration**: Complete IPAM/DCIM synchronization with VRF support
- **🤖 Nautobot Integration**: 1:1 equivalent of the NetBox sync via the official `pynautobot` SDK (Nautobot 2.x/3.x — locations, namespaces, statuses, IP↔Interface assignments)
- **🔄 Prefect Workflows**: Modern workflow orchestration with automated pipelines and beautiful UI
- **🔐 Secure Credentials**: OS-native keyring storage with .env file support
- **🚀 High Performance**: Parallel processing with configurable workers
- **📝 Structured Logging**: Professional logging with Loguru
- **🌐 SSH Tunnel Support**: Test mode for containerlab and remote environments
- **📁 Flexible Output**: Custom output directories with `--output-dir` option
- **🧹 Code Quality**: Ruff linting and formatting for clean, maintainable code
- **⚙️ CI/CD Pipeline**: GitLab CI/CD with lint, test, security, and build stages
- **🏗️ Modern Architecture**: Built with uv, Loguru, and industry standards

## Architecture

```
┌─────────────────────────────────────────────────────────────────┐
│                         sfacts/                                 │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │  sfacts/core/                                           │    │
│  │  • scanner.py      - Network discovery & port scanning  │    │
│  │  • collector.py    - Fact collection & TextFSM parsing  │    │
│  │  • netbox/         - NetBox sync modules                │    │
│  │  • nautobot/       - Nautobot sync modules (pynautobot) │    │
│  │  • utils/          - Auth, logging, helpers             │    │
│  └─────────────────────────────────────────────────────────┘    │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │  sfacts/tasks/                                          │    │
│  │  • collect.py      - Fact collection task               │    │
│  │  • netbox.py       - NetBox sync task                   │    │
│  │  • nautobot.py     - Nautobot sync task                 │    │
│  └─────────────────────────────────────────────────────────┘    │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │  sfacts/flows/                                          │    │
│  │  • modular_flows.py - Prefect flow definitions          │    │
│  └─────────────────────────────────────────────────────────┘    │
│  ┌─────────────────────────────────────────────────────────┐    │
│  │  sfacts/cli/                                            │    │
│  │  • main.py         - CLI entry point                    │    │
│  └─────────────────────────────────────────────────────────┘    │
└─────────────────────────────────────────────────────────────────┘

┌─────────────────────────────────────────────────────────────────┐
│                      Prefect Orchestration                      │
│  • serve.py        - Deploy Prefect flows                       │
│  • deployments.py  - Flow deployment configuration              │
│  • UI @ localhost:4200 for workflow management                  │
└─────────────────────────────────────────────────────────────────┘

```

## CLI Commands

| Command | Purpose | Example |
|---------|---------|---------|
| `sfacts discover` | Network device discovery | `sfacts discover --subnet 192.168.1.0/24` |
| `sfacts facts` | Device fact collection | `sfacts facts --input discovery_results.json` |
| `sfacts export` | Export to Nornir inventory | `sfacts export --output-dir inventory` |
| `sfacts netbox sync` | Sync facts to NetBox IPAM/DCIM | `sfacts netbox sync --site-name "Production"` |
| `sfacts netbox test` | Test NetBox API connection | `sfacts netbox test` |
| `sfacts netbox status` | Show NetBox instance statistics | `sfacts netbox status` |
| `sfacts netbox purge` | Delete NetBox data (DESTRUCTIVE) | `sfacts netbox purge --dry-run` |
| `sfacts netbox validate` | Validate facts for NetBox sync | `sfacts netbox validate` |
| `sfacts nautobot sync` | Sync facts to Nautobot IPAM/DCIM | `sfacts nautobot sync --site-name "Production"` |
| `sfacts nautobot test` | Test Nautobot API connection | `sfacts nautobot test` |
| `sfacts nautobot status` | Show Nautobot instance statistics | `sfacts nautobot status` |
| `sfacts nautobot purge` | Delete Nautobot data (DESTRUCTIVE) | `sfacts nautobot purge --dry-run` |

### Discovery Modes

- **Subnet Discovery**: Scan network ranges for SSH-enabled devices
- **SSH Tunnel Mode**: Connect through SSH tunnels for testing (containerlab, etc.)
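The subnet-discovery idea above can be sketched with nothing but the standard library: probe TCP port 22 on every host address and fan the probes out across a worker pool. This is a minimal illustration, not the actual `scanner.py` implementation; the function names, defaults, and return shape here are assumptions.

```python
import socket
from concurrent.futures import ThreadPoolExecutor
from ipaddress import ip_network


def ssh_open(host: str, port: int = 22, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds (SSH likely listening)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def scan_subnet(subnet: str, workers: int = 20, timeout: float = 1.0) -> list[str]:
    """Probe every host address in the subnet in parallel; return hosts with SSH open."""
    hosts = [str(h) for h in ip_network(subnet, strict=False).hosts()]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda h: (h, ssh_open(h, timeout=timeout)), hosts)
    return [host for host, is_open in results if is_open]
```

A successful TCP handshake only proves something is listening; device-type identification (via Netmiko SSHDetect) happens in a later step.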

## Installation

This project uses [uv](https://docs.astral.sh/uv/) for fast Python package management. The project requires Python 3.9 or higher (`pynautobot` and Prefect both require 3.9+).

### Install uv (if not already installed)
```bash
curl -LsSf https://astral.sh/uv/install.sh | sh
```

### Install dependencies
```bash
uv sync
```

### Alternative: Using pip (legacy)
```bash
pip install -r requirements.txt
```

## Quick Start

### 1. Install Dependencies
```bash
uv sync
```

### 2. Set Up Credentials (Choose One Method)

**Option A: Environment Variables**
```bash
export NETOPS_USERNAME=your_username
export NETOPS_PASSWORD=your_password
```

**Option B: Secure Keyring Storage**
```bash
# Store credentials securely in OS keyring
uv run python -c "
from sfacts.utils.auth import CredentialManager
cm = CredentialManager()
cm.store_credentials('your_username', 'your_password')
"
```

### 3. Configure NetBox Integration (Optional)

Create a `.env` file for NetBox integration:
```bash
# Copy template and configure
cp .env.template .env

# Edit .env with your NetBox details
NETBOX_URL=https://your-netbox-instance.com
NETBOX_TOKEN=your-api-token-here
```

### 4. Discover Network Devices

**Subnet Discovery**
```bash
# Discover devices in a network range
uv run sfacts discover --subnet 192.168.1.0/24

# Save results to specific directory
uv run sfacts discover --subnet 192.168.1.0/24 --output-dir ./network_scans

# High-performance scanning with custom output
uv run sfacts discover --subnet 10.0.0.0/16 --workers 50 --timeout 15 --output-dir ./results
```

**SSH Tunnel Mode** (for testing/containerlab)
```bash
# Connect through SSH tunnels
uv run sfacts discover --tunnel-host localhost --tunnel-ports 2101-2116
```
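In tunnel mode, each local port in a range like `2101-2116` maps to one lab device. A tiny helper for expanding such a spec might look like the following; the actual CLI's parsing logic may differ, and `parse_port_range` is a name we made up for illustration.

```python
def parse_port_range(spec: str) -> list[int]:
    """Expand a port spec like '2101-2116' (inclusive) or a single port like '2101'."""
    if "-" in spec:
        start, end = spec.split("-", 1)
        return list(range(int(start), int(end) + 1))
    return [int(spec)]
```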

### 5. Collect Device Facts
```bash
# Collect facts from discovered devices
uv run sfacts facts --input disco_results_*.json

# Save facts to specific directory
uv run sfacts facts --input disco_results.json --output-dir ./network_facts

# Direct subnet fact collection with custom output
uv run sfacts facts --subnet 192.168.1.0/24 --workers 50 --output-dir ./results
```

### 6. NetBox Integration (Optional)

**Test NetBox Connection**
```bash
# Test API connectivity
uv run sfacts netbox test

# Check NetBox instance status and data statistics
uv run sfacts netbox status
```

**Synchronize to NetBox IPAM/DCIM**
```bash
# Dry-run to preview changes
uv run sfacts netbox sync --site-name "Production" --dry-run

# Sync device facts to NetBox
uv run sfacts netbox sync --site-name "Production"

# Validate facts data before sync
uv run sfacts netbox validate
```

**NetBox Data Management**
```bash
# Preview what would be deleted (SAFE)
uv run sfacts netbox purge --dry-run

# Delete all NetBox data (DESTRUCTIVE - requires confirmation)
uv run sfacts netbox purge

# Delete specific site data only
uv run sfacts netbox purge --site-name "Lab Environment" --dry-run
```

### 6b. Nautobot Integration (alternative to NetBox)

The Nautobot backend mirrors the NetBox surface area 1:1 and uses the official
[`pynautobot`](https://github.com/nautobot/pynautobot) SDK end-to-end. Tested
against **Nautobot 3.x**.

Set credentials via environment (or `.env`):

```bash
export NAUTOBOT_URL=http://localhost:8080
export NAUTOBOT_TOKEN=your-api-token
```

**Run a local Nautobot for development**

A turnkey Nautobot 3.1 lab is provided under `labs/nautobot/`:

```bash
docker compose -f labs/nautobot/docker-compose.yml up -d
./labs/nautobot/bootstrap.sh         # creates admin/admin + API token
uv run sfacts nautobot test
```

**Synchronize to Nautobot**

```bash
# Test connection / inspect statistics
uv run sfacts nautobot test
uv run sfacts nautobot status

# Dry-run preview
uv run sfacts nautobot sync --site-name "Production" --dry-run

# Real sync (standard mode)
uv run sfacts nautobot sync --site-name "Production"

# Bulk mode (10-50x faster, optional --adaptive batching)
uv run sfacts nautobot sync --site-name "Production" --bulk
```

**Nautobot data management**

```bash
# Preview what would be deleted
uv run sfacts nautobot purge --dry-run

# Scoped purge (one Location and its dependents)
uv run sfacts nautobot purge --site-name "Lab Environment"

# Full purge (DESTRUCTIVE; also removes the auto-created LocationType)
uv run sfacts nautobot purge
```

> **Schema notes (Nautobot 3.x):** Sites are modeled as **Locations** (under
> a `LocationType` named `Site` that we auto-provision); device roles live in
> the generic `extras.roles` registry; prefixes/IPs live in a **Namespace**
> (default `Global`); IP↔Interface assignments use the
> `ipam.ip_address_to_interface` join model; and Nautobot stores MAC addresses
> directly on `Interface.mac_address` (no separate MAC repository).
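To make the schema differences above concrete, here are the rough payload shapes the sync has to build for Nautobot 3.x. These are hypothetical sketches: the field names are illustrative, not taken from this project's code, and the real REST schema should be checked against the Nautobot API docs.

```python
# Hypothetical payload shapes for the Nautobot 3.x objects described above.
# Field names are illustrative; consult the Nautobot API reference for the real schema.


def location_payload(site_name: str, location_type_id: str, status_id: str) -> dict:
    """A 'site' becomes a Location under the auto-provisioned 'Site' LocationType."""
    return {"name": site_name, "location_type": location_type_id, "status": status_id}


def ip_assignment_payload(ip_id: str, interface_id: str) -> dict:
    """IP<->Interface links go through the ipam.ip_address_to_interface join model."""
    return {"ip_address": ip_id, "interface": interface_id}
```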

### 7. Prefect Workflow Orchestration (Optional)

**Install Prefect workflows**
```bash
uv sync --extra workflows
```

**Start Prefect server and deployments**
```bash
# Terminal 1: Start Prefect server
prefect server start

# Terminal 2: Start deployments
uv run python serve.py
```

**Access Prefect UI**
- Open http://localhost:4200
- Run deployments from the UI

**Seven deployments available:**

*Pipeline:*
1. **discover-and-collect** — Main entry point: discover + collect facts as subflows (one click)
2. **discover-only** — Discovery only (re-runs, topology changes)
3. **collect-only** — Fact collection only (uses latest `devices-json` artifact)
4. **sync-to-netbox** — Push facts to NetBox IPAM. Mandatory preflight + postflight with before/after diff
5. **sync-to-nautobot** — Push facts to Nautobot IPAM/DCIM. Mandatory preflight + postflight with before/after diff

*Admin:*
6. **purge-netbox** — ⚠️ Destructive NetBox cleanup. `dry_run=True` by default. Mandatory preflight + postflight
7. **purge-nautobot** — ⚠️ Destructive Nautobot cleanup. `dry_run=True` by default. Mandatory preflight + postflight

**Example workflow:**
```bash
# Run from Prefect UI (http://localhost:4200):
# 1. discover-and-collect with target: "192.168.121.0/24"
#    → produces devices-json + facts-json artifacts
# 2. sync-to-netbox (automatically uses latest facts)
```

**Benefits:**
- ✅ Single-click pipeline (discover + collect combined)
- ✅ Zero manual JSON copying between steps
- ✅ Before/after NetBox object-count diff on every sync and purge
- ✅ Preflight abort if NetBox is unreachable — no blind writes
- ✅ Individual steps (`discover-only`, `collect-only`) for targeted re-runs

See **[Prefect Modular Workflows](docs/prefect-modular-workflows.md)** for complete documentation.

## Documentation

- **[Usage Guide](docs/usage-guide.md)** - Detailed usage instructions and examples
- **[NetBox Integration](docs/netbox-integration.md)** - Complete NetBox IPAM/DCIM integration guide
- **[NetBox Dependency Map](docs/netbox-dependency-map.md)** - NetBox sync system architecture and data flow
- **[Prefect Integration](docs/prefect-integration.md)** - Legacy Prefect flows documentation
- **[Prefect Modular Workflows](docs/prefect-modular-workflows.md)** - New modular deployment architecture
- **[CI/CD Pipeline](docs/CI-CD-SETUP.md)** - GitLab CI/CD pipeline setup and configuration
- **[Inventory Structure](docs/inventory-structure.md)** - Complete inventory configuration reference
- **[Troubleshooting](docs/troubleshooting.md)** - Common issues and solutions

## Output Structure

```
output/
├── disco_results_YYYYMMDD_HHMMSS.json        # Network discovery results
├── facts_results_YYYYMMDD_HHMMSS.json        # Device facts with TextFSM parsing
└── report/
    └── network_report_YYYYMMDD_HHMMSS.md     # Comprehensive network report
```
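The `YYYYMMDD_HHMMSS` suffix in the filenames above is a plain `strftime` timestamp. A sketch of how such paths could be built (the helper name and signature are ours, not the project's):

```python
from datetime import datetime
from pathlib import Path


def timestamped_path(output_dir: str, prefix: str, suffix: str = ".json") -> Path:
    """Build a path like output/disco_results_YYYYMMDD_HHMMSS.json."""
    stamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    return Path(output_dir) / f"{prefix}_{stamp}{suffix}"
```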

## Environment Variables

The following environment variables are supported:

**Device Authentication:**
- **`NETOPS_USERNAME`** - SSH username for device authentication
- **`NETOPS_PASSWORD`** - SSH password for device authentication  
- **`NETOPS_SECRET`** - Enable secret for privileged access (optional)

**NetBox Integration:**
- **`NETBOX_URL`** - NetBox instance URL (e.g., https://netbox.company.com)
- **`NETBOX_TOKEN`** - NetBox API token for authentication
- **`NETBOX_VERIFY_SSL`** - SSL certificate verification (default: true)
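Reading these variables is straightforward stdlib work; the only subtlety is that `NETBOX_VERIFY_SSL` arrives as a string and must be coerced to a boolean. A hedged sketch (the helper name and the exact set of accepted falsy strings are assumptions, not the project's actual parsing rules):

```python
import os


def load_netbox_settings() -> dict:
    """Read NetBox settings from the environment (illustrative helper)."""
    verify = os.environ.get("NETBOX_VERIFY_SSL", "true").strip().lower()
    return {
        "url": os.environ.get("NETBOX_URL"),
        "token": os.environ.get("NETBOX_TOKEN"),
        "verify_ssl": verify not in ("false", "0", "no"),
    }
```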

## Development

### Code Quality Tools

This project uses modern Python development tools:

```bash
# Install development dependencies
uv sync --dev

# Run linting and formatting
uv run ruff check sfacts/ tests/
uv run ruff check --fix sfacts/ tests/
uv run ruff format sfacts/ tests/

# Type checking
uv run mypy sfacts/

# Security scanning
uv run bandit -r sfacts/

# Run tests with coverage
uv run pytest
```

### CI/CD Pipeline

The project includes a GitLab CI/CD pipeline (`.gitlab-ci.yml`) with four stages:

1. **Lint** - Ruff linting, formatting checks, MyPy type checking
2. **Test** - Unit tests with coverage reporting
3. **Security** - Dependency vulnerability scanning, Bandit security analysis, code quality metrics
4. **Build** - Python package creation (main branch and tags)

See **[CI/CD Pipeline](docs/CI-CD-SETUP.md)** for full setup details.

## Key Technologies

- **[uv](https://docs.astral.sh/uv/)** - Fast Python package management
- **[Prefect](https://docs.prefect.io/)** - Modern workflow orchestration and scheduling
- **[Loguru](https://loguru.readthedocs.io/)** - Structured logging with colors and context
- **[Keyring](https://keyring.readthedocs.io/)** - Secure OS-native credential storage
- **[python-dotenv](https://python-dotenv.readthedocs.io/)** - Environment variable management from .env files
- **[Netmiko](https://ktbyers.github.io/netmiko/)** - Multi-vendor SSH device connections
- **[TextFSM](https://github.com/google/textfsm)** - Structured parsing with ntc-templates
- **[pynetbox](https://pynetbox.readthedocs.io/)** - NetBox API client for IPAM/DCIM integration
- **[pynautobot](https://pynautobot.readthedocs.io/)** - Nautobot API client for IPAM/DCIM integration
- **[Rich](https://rich.readthedocs.io/)** - Beautiful progress bars and terminal output

## Additional Features

### Nornir Inventory Export
Export collected facts to Nornir inventory format for integration with Nornir-based automation:

```bash
# Export latest facts to Nornir inventory
uv run sfacts export

# Export specific facts file
uv run sfacts export --input-file facts_results_20250124.json

# Include credentials in defaults.yaml
uv run sfacts export --include-credentials
```

See [inventory/README.md](inventory/README.md) for details on using exported inventory files.

## Contributing

1. Fork the repository
2. Create a feature branch
3. Make your changes
4. Add tests if applicable
5. Submit a merge request

## License

This project is licensed under the MIT License - see the LICENSE file for details.

---

<p align="center">
  Built and maintained by <a href="https://netodata.io"><strong>NETODATA</strong></a> — network automation, done right.
</p>
