Metadata-Version: 2.4
Name: kestrel_sovereign
Version: 0.4.1
Summary: Kestrel Sovereign AI Agent Framework - Constitutional AI with cryptographic identity
Project-URL: Homepage, https://kestrelsovereign.com
Project-URL: Source, https://github.com/KestrelSovereignAI/kestrel-sovereign
Project-URL: Issues, https://github.com/KestrelSovereignAI/kestrel-sovereign/issues
Project-URL: Documentation, https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/README.md
Project-URL: Discussions, https://github.com/KestrelSovereignAI/kestrel-sovereign/discussions
Author: UncleSaurus
Maintainer: UncleSaurus
License-Expression: Apache-2.0
License-File: LICENSE
Keywords: agents,ai,constitutional-ai,cryptographic-identity,did,llm,open-source,sovereign
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Operating System :: MacOS
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Application Frameworks
Classifier: Typing :: Typed
Requires-Python: <3.14,>=3.11
Requires-Dist: aiohttp>=3.13.3
Requires-Dist: aiosqlite>=0.21.0
Requires-Dist: anthropic>=0.75.0
Requires-Dist: asyncpg>=0.30.0
Requires-Dist: authlib>=1.3.0
Requires-Dist: beautifulsoup4==4.13.4
Requires-Dist: cbor2>=5.6.0
Requires-Dist: certifi==2026.1.4
Requires-Dist: charset-normalizer==3.4.4
Requires-Dist: cryptography>=45.0.5
Requires-Dist: email-validator>=2.3.0
Requires-Dist: fastapi==0.128.1
Requires-Dist: google-auth>=2.40.3
Requires-Dist: google-cloud-logging>=3.5.0
Requires-Dist: google-cloud-storage>=3.4.1
Requires-Dist: google-genai>=1.0.0
Requires-Dist: google-generativeai>=0.8.6
Requires-Dist: httpx>=0.27.0
Requires-Dist: idna==3.11
Requires-Dist: iniconfig==2.1.0
Requires-Dist: itsdangerous>=2.2.0
Requires-Dist: kestrel-sovereign-sdk<1,>=0.1
Requires-Dist: networkx==3.5
Requires-Dist: numpy>=2.3.1
Requires-Dist: ollama>=0.6.1
Requires-Dist: openai==1.93.2
Requires-Dist: packaging==24.1
Requires-Dist: pillow>=12.1.1
Requires-Dist: pluggy==1.6.0
Requires-Dist: pqcrypto>=0.4.0
Requires-Dist: psutil>=7.1.3
Requires-Dist: psycopg2-binary>=2.9.10
Requires-Dist: pycryptodome==3.23.0
Requires-Dist: pydantic==2.12.5
Requires-Dist: pygments==2.19.2
Requires-Dist: pyjwt[crypto]>=2.8.0
Requires-Dist: python-dotenv==1.1.1
Requires-Dist: python-multipart>=0.0.20
Requires-Dist: pyyaml>=6.0
Requires-Dist: questionary>=2.0.1
Requires-Dist: rank-bm25>=0.2.2
Requires-Dist: redis>=6.4.0
Requires-Dist: requests>=2.32.5
Requires-Dist: slowapi>=0.1.9
Requires-Dist: tiktoken==0.9.0
Requires-Dist: toml==0.10.2
Requires-Dist: urllib3>=2.6.3
Requires-Dist: uvicorn==0.35.0
Requires-Dist: webrtcvad>=2.0.10
Provides-Extra: all-features
Requires-Dist: accelerate>=1.12.0; extra == 'all-features'
Requires-Dist: azure-identity>=1.15.0; extra == 'all-features'
Requires-Dist: azure-mgmt-appcontainers>=3.0.0; extra == 'all-features'
Requires-Dist: chromadb==1.0.15; extra == 'all-features'
Requires-Dist: deepgram-sdk>=4.0.0; extra == 'all-features'
Requires-Dist: diffusers>=0.36.0; extra == 'all-features'
Requires-Dist: elevenlabs>=1.0.0; extra == 'all-features'
Requires-Dist: faster-whisper>=1.1.0; extra == 'all-features'
Requires-Dist: google-cloud-compute>=1.15.0; extra == 'all-features'
Requires-Dist: google-cloud-run>=0.10.0; extra == 'all-features'
Requires-Dist: langchain-community>=0.3.31; extra == 'all-features'
Requires-Dist: langchain>=0.3.31; extra == 'all-features'
Requires-Dist: numpy==2.3.1; extra == 'all-features'
Requires-Dist: ollama>=0.6.1; extra == 'all-features'
Requires-Dist: opentelemetry-api>=1.20.0; extra == 'all-features'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc>=1.20.0; extra == 'all-features'
Requires-Dist: opentelemetry-instrumentation-fastapi>=0.41b0; extra == 'all-features'
Requires-Dist: opentelemetry-sdk>=1.20.0; extra == 'all-features'
Requires-Dist: peft>=0.18.1; extra == 'all-features'
Requires-Dist: piper-tts>=1.2.0; extra == 'all-features'
Requires-Dist: prometheus-client>=0.21.0; extra == 'all-features'
Requires-Dist: replicate>=1.0.4; extra == 'all-features'
Requires-Dist: resend>=2.19.0; extra == 'all-features'
Requires-Dist: runpod>=1.8.1; extra == 'all-features'
Requires-Dist: safetensors>=0.7.0; extra == 'all-features'
Requires-Dist: sendgrid>=6.12.5; extra == 'all-features'
Requires-Dist: sentence-transformers==3.0.1; extra == 'all-features'
Requires-Dist: soundfile>=0.13.0; extra == 'all-features'
Requires-Dist: spacy>=3.7.0; extra == 'all-features'
Requires-Dist: stripe>=10.0.0; extra == 'all-features'
Requires-Dist: tavily-python>=0.3.0; extra == 'all-features'
Requires-Dist: torch>=2.9.1; extra == 'all-features'
Requires-Dist: torchaudio>=2.9.1; extra == 'all-features'
Requires-Dist: torchvision>=0.24.1; extra == 'all-features'
Requires-Dist: transformers>=4.57.1; extra == 'all-features'
Requires-Dist: twilio>=9.9.1; extra == 'all-features'
Requires-Dist: vastai-sdk>=0.1.0; extra == 'all-features'
Requires-Dist: web3>=7.0.0; extra == 'all-features'
Provides-Extra: cloud
Requires-Dist: azure-identity>=1.15.0; extra == 'cloud'
Requires-Dist: azure-mgmt-appcontainers>=3.0.0; extra == 'cloud'
Requires-Dist: google-cloud-compute>=1.15.0; extra == 'cloud'
Requires-Dist: google-cloud-run>=0.10.0; extra == 'cloud'
Requires-Dist: runpod>=1.8.1; extra == 'cloud'
Requires-Dist: vastai-sdk>=0.1.0; extra == 'cloud'
Provides-Extra: delivery
Requires-Dist: resend>=2.19.0; extra == 'delivery'
Requires-Dist: sendgrid>=6.12.5; extra == 'delivery'
Requires-Dist: twilio>=9.9.1; extra == 'delivery'
Provides-Extra: full
Requires-Dist: accelerate>=1.12.0; extra == 'full'
Requires-Dist: asgi-lifespan>=2.1.0; extra == 'full'
Requires-Dist: azure-identity>=1.15.0; extra == 'full'
Requires-Dist: azure-mgmt-appcontainers>=3.0.0; extra == 'full'
Requires-Dist: chromadb==1.0.15; extra == 'full'
Requires-Dist: deepgram-sdk>=4.0.0; extra == 'full'
Requires-Dist: diffusers>=0.36.0; extra == 'full'
Requires-Dist: elevenlabs>=1.0.0; extra == 'full'
Requires-Dist: faster-whisper>=1.1.0; extra == 'full'
Requires-Dist: google-cloud-compute>=1.15.0; extra == 'full'
Requires-Dist: google-cloud-run>=0.10.0; extra == 'full'
Requires-Dist: langchain-community>=0.3.31; extra == 'full'
Requires-Dist: langchain>=0.3.31; extra == 'full'
Requires-Dist: numpy==2.3.1; extra == 'full'
Requires-Dist: ollama>=0.6.1; extra == 'full'
Requires-Dist: opentelemetry-api>=1.20.0; extra == 'full'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc>=1.20.0; extra == 'full'
Requires-Dist: opentelemetry-instrumentation-fastapi>=0.41b0; extra == 'full'
Requires-Dist: opentelemetry-sdk>=1.20.0; extra == 'full'
Requires-Dist: peft>=0.18.1; extra == 'full'
Requires-Dist: piper-tts>=1.2.0; extra == 'full'
Requires-Dist: prometheus-client>=0.21.0; extra == 'full'
Requires-Dist: pytest-asyncio>=1.1.0; extra == 'full'
Requires-Dist: pytest-timeout>=2.3.1; extra == 'full'
Requires-Dist: pytest-xdist>=3.0.0; extra == 'full'
Requires-Dist: pytest>=8.0.0; extra == 'full'
Requires-Dist: replicate>=1.0.4; extra == 'full'
Requires-Dist: resend>=2.19.0; extra == 'full'
Requires-Dist: runpod>=1.8.1; extra == 'full'
Requires-Dist: safetensors>=0.7.0; extra == 'full'
Requires-Dist: sendgrid>=6.12.5; extra == 'full'
Requires-Dist: sentence-transformers==3.0.1; extra == 'full'
Requires-Dist: soundfile>=0.13.0; extra == 'full'
Requires-Dist: spacy>=3.7.0; extra == 'full'
Requires-Dist: stripe>=10.0.0; extra == 'full'
Requires-Dist: tavily-python>=0.3.0; extra == 'full'
Requires-Dist: torch>=2.9.1; extra == 'full'
Requires-Dist: torchaudio>=2.9.1; extra == 'full'
Requires-Dist: torchvision>=0.24.1; extra == 'full'
Requires-Dist: transformers>=4.57.1; extra == 'full'
Requires-Dist: twilio>=9.9.1; extra == 'full'
Requires-Dist: vastai-sdk>=0.1.0; extra == 'full'
Requires-Dist: web3>=7.0.0; extra == 'full'
Provides-Extra: local
Requires-Dist: chromadb==1.0.15; extra == 'local'
Requires-Dist: langchain-community>=0.3.31; extra == 'local'
Requires-Dist: langchain>=0.3.31; extra == 'local'
Requires-Dist: numpy==2.3.1; extra == 'local'
Requires-Dist: ollama>=0.6.1; extra == 'local'
Requires-Dist: sentence-transformers==3.0.1; extra == 'local'
Requires-Dist: spacy>=3.7.0; extra == 'local'
Provides-Extra: ml
Requires-Dist: accelerate>=1.12.0; extra == 'ml'
Requires-Dist: diffusers>=0.36.0; extra == 'ml'
Requires-Dist: peft>=0.18.1; extra == 'ml'
Requires-Dist: safetensors>=0.7.0; extra == 'ml'
Requires-Dist: torch>=2.9.1; extra == 'ml'
Requires-Dist: torchaudio>=2.9.1; extra == 'ml'
Requires-Dist: torchvision>=0.24.1; extra == 'ml'
Requires-Dist: transformers>=4.57.1; extra == 'ml'
Provides-Extra: observability
Requires-Dist: opentelemetry-api>=1.20.0; extra == 'observability'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc>=1.20.0; extra == 'observability'
Requires-Dist: opentelemetry-instrumentation-fastapi>=0.41b0; extra == 'observability'
Requires-Dist: opentelemetry-sdk>=1.20.0; extra == 'observability'
Requires-Dist: prometheus-client>=0.21.0; extra == 'observability'
Provides-Extra: search
Requires-Dist: tavily-python>=0.3.0; extra == 'search'
Provides-Extra: test
Requires-Dist: asgi-lifespan>=2.1.0; extra == 'test'
Requires-Dist: pytest-asyncio>=1.1.0; extra == 'test'
Requires-Dist: pytest-timeout>=2.3.1; extra == 'test'
Requires-Dist: pytest-xdist>=3.0.0; extra == 'test'
Requires-Dist: pytest>=8.0.0; extra == 'test'
Provides-Extra: visual
Requires-Dist: replicate>=1.0.4; extra == 'visual'
Provides-Extra: voice
Requires-Dist: deepgram-sdk>=4.0.0; extra == 'voice'
Requires-Dist: elevenlabs>=1.0.0; extra == 'voice'
Requires-Dist: faster-whisper>=1.1.0; extra == 'voice'
Requires-Dist: piper-tts>=1.2.0; extra == 'voice'
Requires-Dist: soundfile>=0.13.0; extra == 'voice'
Provides-Extra: voice-cloud
Requires-Dist: deepgram-sdk>=4.0.0; extra == 'voice-cloud'
Requires-Dist: elevenlabs>=1.0.0; extra == 'voice-cloud'
Provides-Extra: voice-local
Requires-Dist: faster-whisper>=1.1.0; extra == 'voice-local'
Requires-Dist: piper-tts>=1.2.0; extra == 'voice-local'
Requires-Dist: soundfile>=0.13.0; extra == 'voice-local'
Provides-Extra: wallet
Requires-Dist: stripe>=10.0.0; extra == 'wallet'
Requires-Dist: web3>=7.0.0; extra == 'wallet'
Description-Content-Type: text/markdown

# Kestrel: Sovereign AI Agent Framework

> Build AI agents that nobody can take away from their users — not you, not the cloud, not the next pivot.

Kestrel is a production-ready framework for creating autonomous AI agents with cryptographic identity, persistent memory, and constitutional governance. Every agent you deploy is **owned by its user**, governed by **immutable principles**, and able to **remember across every conversation**.

### Three Pillars

| Pillar | What it means |
|--------|--------------|
| **Portable DID identity** | Cryptographic identity the agent's user owns. Exportable, self-hostable, cloud-optional — the agent is not bound to any provider. |
| **Persistent memory you own** | SQLite-backed knowledge graph with full-text search and RAG. Conversations, documents, relationships — all searchable, portable, and encrypted at rest. |
| **Constitutional governance** | Every agent runs under an audited set of principles enforced *above* the LLM. Genesis audit on creation. Amendment requires cryptographic signature. |

### What's in core, what's an add-on

`pip install kestrel-sovereign` gives you a complete, working sovereign agent: identity, memory, constitution, privacy modes, multi-LLM support, voice (Piper TTS + FasterWhisper STT), local sandboxed compute, and a Cloud Run deployment path. Everything you need to run an agent locally with zero cloud commitment.

Cloud providers (RunPod, Vast.ai), specialized integrations (MCP, GitHub App, wallet), and proprietary training adapters are **installable add-ons** — separate Python packages that register themselves via entry points. This split is being completed across [#462](https://github.com/KestrelSovereignAI/kestrel-sovereign/issues/462) and [#560](https://github.com/KestrelSovereignAI/kestrel-sovereign/issues/560); current state is documented in [`KESTREL_FEATURES.md`](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/KESTREL_FEATURES.md).

## 🚀 Quick Start

### Prerequisites
- Python 3.11-3.13 (3.14 not yet supported due to tiktoken)
- [uv](https://docs.astral.sh/uv/) (for package management)
- [Ollama](https://ollama.ai) (optional - for local LLM inference without API keys)

### Install uv

If you don't have `uv` installed:

```bash
# macOS/Linux
curl -LsSf https://astral.sh/uv/install.sh | sh

# Windows (PowerShell)
powershell -ExecutionPolicy ByPass -c "irm https://astral.sh/uv/install.ps1 | iex"

# Or with pip
pip install uv
```

### Installation

```bash
# 1. Clone and setup
git clone https://github.com/KestrelSovereignAI/kestrel-sovereign.git
cd kestrel-sovereign
uv sync  # Creates .venv and installs all dependencies

# 2. (Optional) Start Ollama for local models - skip if using cloud APIs
ollama serve
ollama pull llama3.2:3b

# 3. Run the setup wizard (interactive: configures .env, kestrel.toml [llm], agent)
uv run kestrel setup
# Or hand-edit: cp kestrel.toml.example kestrel.toml

# 4. Doctor check (verify readiness)
uv run kestrel doctor

# 5. Create your agent
uv run kestrel create MyAgent

# 6. Start your agent
uv run kestrel start MyAgent
```

If you're upgrading from a pre-2026-05 setup that used a standalone `llm_config.toml`, run `uv run kestrel migrate-llm-config` to fold it into `kestrel.toml [llm]`. The legacy file is no longer read.

Your agent is now running at `http://localhost:8888`.

> **Port conflict?** Each agent has its own config. Edit `agent_data/myagent/kestrel.toml` to change the port, or use `--port 8899` on the command line.

> **Test it:** Visit `http://localhost:8888` in your browser to open the built-in **Sovereign Console** (web UI with Chat, Identity, Constitution, Memories, and more). Or check `http://localhost:8888/health` for a quick health check.

> **Windows users:** the CLI prints emoji. If you see `UnicodeEncodeError: 'charmap' codec can't encode character ...`, run `chcp 65001` once in your PowerShell session to switch the console to UTF-8. (As of v0.1.9 the CLI auto-reconfigures stdout, so a fresh install should not hit this.)
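The underlying fix is `TextIOWrapper.reconfigure`, which switches a text stream's encoding in place. A self-contained sketch of the same call on a throwaway stream (not the CLI's actual code):

```python
import io

# Simulate a Windows console stream with a legacy codec, then apply the same
# reconfigure() call the CLI uses on sys.stdout.
stream = io.TextIOWrapper(io.BytesIO(), encoding="cp1252")
stream.reconfigure(encoding="utf-8")
stream.write("🦅 kestrel")  # would raise UnicodeEncodeError under cp1252
stream.flush()
```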

### CLI Commands (Cross-Platform)

All commands work on Windows, macOS, and Linux. Most take the agent name as an argument; `config` takes the agent directory:

```bash
uv run kestrel health                       # Check prerequisites
uv run kestrel create MyAgent               # Create a new agent
uv run kestrel start MyAgent                # Start an agent
uv run kestrel stop MyAgent                 # Stop an agent
uv run kestrel status                       # Show all running agents
uv run kestrel list                         # List available agents
uv run kestrel shell MyAgent                # CLI chat interface
uv run kestrel config ./agent_data/MyAgent  # Show agent config
```

### Feature management (`kestrel feature`)

Kestrel ships a lean core; everything else is a feature. Cloud providers, training adapters, voice cloud backends, and specialized integrations are installable packages that register themselves via Python entry points.

```bash
uv run kestrel feature list                   # Show installed + available features
uv run kestrel feature info <name>            # Detailed info about a feature
uv run kestrel feature install <name>         # Install a feature package
uv run kestrel feature enable <name>          # Enable an installed feature
uv run kestrel feature disable <name>         # Disable without uninstalling
uv run kestrel feature scaffold <name>        # Generate a new feature package skeleton
```

The canonical inventory of features lives in [`KESTREL_FEATURES.md`](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/KESTREL_FEATURES.md); the runtime registry is in [`kestrel_sovereign/data/feature_registry.toml`](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/kestrel_sovereign/data/feature_registry.toml).

### Per-Agent Configuration

Each agent can have a `kestrel.toml` config file in its directory:

```toml
# agent_data/myagent/kestrel.toml
[agent]
name = "MyAgent"
port = 8888
host = "0.0.0.0"
log_level = "INFO"
```

Create or edit config:
```bash
uv run kestrel config ./agent_data/myagent --init           # Create config
uv run kestrel config ./agent_data/myagent --set-port 8899  # Change port
uv run kestrel config ./agent_data/myagent --set-name MyAgent  # Change name
```

### Running Multiple Agents

Each agent runs on its own port. Create configs for each:

```bash
# Agent 1: Alpha on port 8888
uv run kestrel create Alpha --port 8888
uv run kestrel start Alpha

# Agent 2: Helper on port 8889
uv run kestrel create Helper --port 8889
uv run kestrel start Helper

# Check status of all agents
uv run kestrel status
```

### Alternative: Direct Commands

```bash
# Start server directly (set KESTREL_DB_PATH first)
KESTREL_DB_PATH=./agent_data/myagent uv run uvicorn server:app --port 8888

# CLI chat (no server needed)
uv run python main.py ./agent_data/myagent
```

> **Note:** `KESTREL_DB_PATH` is a **directory** path, not a file path. The database file `kestrel_prime.db` is created inside the specified directory. For example, setting `KESTREL_DB_PATH=./agent_data/myagent` stores the database at `./agent_data/myagent/kestrel_prime.db`.
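In Python terms, the resolution described in the note looks like this (the default shown is illustrative):

```python
import os
from pathlib import Path

# KESTREL_DB_PATH names a directory; the database file lives inside it.
db_dir = Path(os.environ.get("KESTREL_DB_PATH", "./agent_data/myagent"))
db_file = db_dir / "kestrel_prime.db"
print(db_file)
```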

<a id="web-ui-sovereign-console"></a>
## 🖥️ Web UI (Sovereign Console)

Kestrel includes a built-in web interface called the **Sovereign Console**. Once your agent is running, open `http://localhost:8888` in any browser -- no additional software required.

The console provides 8 tabs:

| Tab | Description |
|-----|-------------|
| **Identity** | View the agent's DID, name, and cryptographic identity |
| **Chat** | Converse with the agent (supports model selection, privacy modes, chat history) |
| **Constitution** | View and audit the agent's constitutional principles |
| **Memories** | Browse the agent's knowledge graph and stored memories |
| **Tasks** | Monitor background tasks and activity |
| **Sovereignty** | Manage data sovereignty, backups, and exports |
| **Resources** | View agent resource usage and configuration |
| **Security** | Manage permissions, audit logs, and session security |

> **Alternative clients:** The server also exposes an OpenAI-compatible API at `/v1/chat/completions`, so you can connect any OpenAI-compatible client (e.g., [Open WebUI](https://github.com/open-webui/open-webui)) if you prefer.

## 🏗️ Architecture Overview

Kestrel agents are built on several key components:

- **Cryptographic Identity**: Each agent has a unique DID (Decentralized Identifier)
- **Enhanced Storage**: SQLite-based memory with FTS, knowledge graphs, and RAG
- **Multi-Model LLM**: Fallback between local (Ollama) and cloud (OpenAI) models
- **Constitutional Governance**: Immutable principles with interpretive flexibility
- **Blockchain Anchoring**: Optional integrity verification via blockchain

## 📁 Project Structure

```
kestrel-sovereign/
├── kestrel_sovereign/         # Core sovereign package
│   ├── cli.py                 # `kestrel` CLI entry point (canonical)
│   ├── kestrel_agent.py       # Core agent class
│   ├── inception_service.py   # Agent creation (DID + genesis audit)
│   ├── agent_config.py        # Per-agent config loader
│   ├── data/feature_registry.toml  # Runtime feature registry
│   └── ...
├── server.py                  # FastAPI agent server
├── host.py                    # Multi-agent host
├── main.py                    # Direct interactive REPL
├── kestrel_sdk/               # Public SDK for feature authors
├── packages/                  # Extracted feature packages
├── features/                  # Built-in features
├── docs/                      # Architecture & guides
└── tests/                     # Test suite
```

## 🎯 Core Features

### 1. Sovereign Memory
- **Persistent Storage**: SQLite with full-text search and knowledge graphs
- **RAG Pipeline**: Document chunking, embedding, and semantic retrieval
- **Conversation History**: Complete interaction tracking with metadata
- **Human-Led Interactions**: Prioritizes user narratives (e.g., storytelling), preserving them in full across sessions

### 2. Multi-Model Intelligence
- **Local First**: Ollama for privacy and cost efficiency
- **Cloud Fallback**: OpenAI for complex reasoning when needed
- **Configurable**: Easy provider switching via configuration

### 3. Cryptographic Identity
- **DID Generation**: Unique decentralized identifiers
- **Signed Operations**: Cryptographic verification of agent actions
- **Ownership Transfer**: Secure agent handoff between users

### 4. Constitutional Governance
- **Immutable Articles**: Core principles that cannot be changed
- **Interpretive Canons**: Flexible guidelines for decision-making
- **Amendment Process**: Cryptographically-signed governance updates

### 5. Data Sovereignty & Privacy Modes
- **Ephemeral Mode**: True off-the-record conversations (nothing stored)
- **Privacy Granularity**: 5 distinct privacy levels for different use cases
- **Decentralized Storage**: Filecoin/IPFS integration for vendor independence
- **Agent Economics**: Autonomous economic contracts using cryptographic payments
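The five privacy levels form an ordering from least to most persistent (EPHEMERAL → ISOLATED → ANONYMOUS → NORMAL → PUBLIC). A sketch; the numeric values are assumptions for illustration, not the framework's internals:

```python
from enum import IntEnum

# Illustrative ordering of the five privacy modes; values are assumptions.
class PrivacyMode(IntEnum):
    EPHEMERAL = 0   # nothing stored
    ISOLATED = 1    # session kept apart from long-term memory
    ANONYMOUS = 2   # stored without identity linkage
    NORMAL = 3      # default persistent mode
    PUBLIC = 4      # shareable / anchored
```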

## ⚠️ Feature Stability (v0.4.1 Beta)

Kestrel covers a wide surface; not all of it ships at the same maturity. **Verified 2026-04-25** by reading code, tests, skip markers, and recent git activity:

### ✅ Stable — production-ready

- **Constitutional AI** — Genesis audits, hierarchical permissions, approval queues
- **DID-based Identity** — `did:pkh` format, portable agent identity, export/import
- **5-Level Privacy Modes** — EPHEMERAL → ISOLATED → ANONYMOUS → NORMAL → PUBLIC
- **Memory & Storage** — SQLite/PostgreSQL with FTS, knowledge graph, RAG pipeline; storage parity contracts in CI
- **LLM service** — Vendor/route/model architecture with Anthropic, OpenAI, Vertex AI, Ollama, OpenRouter, xAI, Groq; retry, structured output, streaming, vision
- **Voice (local)** — Piper TTS + FasterWhisper STT
- **Agent Economics** — Multi-currency wallets (FIL, USDC, USDT, ETH)
- **A2A Protocol** — JSON-RPC 2.0 for agent-to-agent communication
- **Cloud Run deploy** — 90 tests, active maintenance; the most-tested cloud feature

### 🧪 Experimental — works on the happy path; gaps to know about

- **RunPod GPU orchestration** — start/stop/status work; managed-mode log retrieval is `NotImplementedError`; image generation (`!dream`) is dead code; integration tests skip in CI without `RUNPOD_API_KEY`. No active development since early April 2026.
- **Vast.ai GPU marketplace** — broader test coverage than RunPod, but recent extraction/revert churn; integration tests skip without `VASTAI_API_KEY`.
- **GCP Compute GPU VMs** — similar maturity to Vast.ai; integration tests skip without `GCP_PROJECT_ID`.
- **Azure Container Apps deploy** — provider stub; not the recommended deploy target.
- **GitHub code introspection** — file reading, code search, definition lookup, issue tools all work (48 unit tests). The deeper static-analysis surface promised in [`docs/architecture/GITHUB_FEATURE_DESIGN.md`](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/docs/architecture/GITHUB_FEATURE_DESIGN.md) (call graphs, inheritance trees, dependency analysis) is not implemented.
- **Training (LoRA pipeline)** — core ships the protocol + factory; the local-MPS adapter is actively maintained. Cloud-training adapters (RunPod/Vertex/Replicate) work but skip CI without API keys; production-grade adapters are being moved to private packages.

### ⚠️ Work-in-progress

- **DID Verification Layer** — generation works; verification is incomplete
- **E2E Test Stability** — some integration tests are occasionally flaky
- **API Stability** — APIs may change before v1.0; breaking changes will be documented

### ❌ Not implemented in this framework

These are not on the kestrel-sovereign roadmap; if you need them, OpenClaw or a different tool is the better fit.

- **Multi-Channel Messaging** — WhatsApp, Telegram, Discord, Slack integration
- **Voice cloud backends** — beyond local Piper / FasterWhisper (e.g. ElevenLabs, Deepgram)
- **Browser Automation** — Chrome/Chromium control
- **Visual Workspaces** — A2UI canvas, live reload

**Bottom line:** Kestrel is ready for developers building privacy-first, economically independent AI agents, and for the soft-launch preview cohort. It is not yet ready for unmanaged production apps or general consumer use. If a stability classification above doesn't match your experience, please open an issue — that's exactly the signal we need.

## 📚 Documentation

Detailed documentation is available in the `docs/` directory:

- [Documentation Index](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/docs/README.md)

- [Agent Ecosystem](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/docs/architecture/core/AGENT_ECOSYSTEM.md)
- [Agent Economics](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/docs/architecture/economics/AGENT_ECONOMICS.md)
- [Kestrel Constitution](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/docs/principles/KESTREL_CONSTITUTION.md)
- [Cryptographic Anchoring](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/docs/architecture/security/CRYPTOGRAPHIC_ANCHORING.md)
- [Decentralized Storage](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/docs/architecture/storage/DECENTRALIZED_STORAGE.md)
- [Multi-Model Support](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/docs/architecture/core/MULTI_MODEL_SUPPORT.md)
- [Privacy Modes](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/docs/architecture/security/PRIVACY_MODES.md)
- [LLM Service Architecture](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/docs/architecture/LLM_SERVICE_ARCHITECTURE.md)

## 💡 Example Applications

Kestrel is a foundation for AI agents that need to outlive any single vendor, deployment, or owner. Concrete deployments and good-fit use cases:

- **Healthcare RPM agents** — Constitutional governance over an LLM, persistent patient-owned memory, and an audit trail for every clinically relevant action.
- **Long-running personal research agents** — Memory accumulates across months without dependency on a single provider's chat history.
- **Custodial agents for sensitive document workflows** — Privacy-mode tiers (EPHEMERAL → PUBLIC) let one agent handle both an off-the-record consult and a fully-anchored long-term contract.
- **Multi-agent A2A networks** — JSON-RPC 2.0 agent-to-agent protocol lets sovereign agents collaborate without surrendering their identity to a central broker.

## 🧪 Testing

Run tests with `uv run pytest`; `uv` manages the virtual environment for you:
```bash
# Run the full suite
uv run pytest

# Run a single test, stopping at the first failure
uv run pytest -x tests/test_inception.py::test_successful_inception
```

### Clean Install Verification

Kestrel supports multiple installation configurations. Use the verification script to test that clean installs work correctly across all supported scenarios:

```bash
# Run all 5 install scenarios (creates isolated venvs)
./scripts/verify_clean_install.sh

# Run specific tests only
./scripts/verify_clean_install.sh 1 3    # SDK-only and wallet package
```

The install matrix covers:

| Test | Scenario | Verifies |
|------|----------|----------|
| 1 | **SDK only** | `from kestrel_sdk.features.base import Feature` |
| 2 | **Core sovereign** | `from kestrel_sovereign.features.base import Feature` + `/health` |
| 3 | **Feature package** | `from kestrel_feature_wallet import WalletFeature` |
| 4 | **SDK + feature dev mode** | Feature packages can develop against SDK alone |
| 5 | **Full stack** | Sovereign + wallet + intelligence, entry_point discovery |

Integration tests for the same import paths run as part of the normal test suite:

```bash
uv run pytest tests/integration/test_clean_install_verification.py -v
```

## 🔧 Configuration

### LLM Configuration (`kestrel.toml` `[llm]`)

LLM config lives under the `[llm]` section of `kestrel.toml`. The setup wizard (`kestrel setup llm`) will write it for you; you can also hand-edit `kestrel.toml` after copying from `kestrel.toml.example`.

Kestrel uses a **vendor/route/model** schema. A *vendor* is who makes the weights; a *route* is how to reach them (adapter + base URL + auth). API keys belong in `.env` and are referenced by `api_key_env`. See [`kestrel.toml.example`](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/kestrel.toml.example) and [`docs/architecture/LLM_SERVICE_ARCHITECTURE.md`](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/docs/architecture/LLM_SERVICE_ARCHITECTURE.md) for the canonical spec.

```toml
[llm]
route_priority = ["openai:api", "ollama:local"]

[llm.vendors.openai]
is_cloud = true

[llm.vendors.openai.routes.api]
adapter        = "OpenAIAdapter"
api_key_env    = "OPENAI_API_KEY"
model          = "auto"
selection_hints = ["gpt-5", "mini"]

[llm.vendors.ollama]
is_cloud = false

[llm.vendors.ollama.routes.local]
adapter        = "OllamaAdapter"
host           = "http://localhost:11434"
model          = "auto"
selection_hints = ["llama3.2", "qwen"]
```

> Pre-2026-05 setups used a standalone `llm_config.toml` at the repo root. That path was removed (epic #938). Run `kestrel migrate-llm-config` to fold a legacy file into `kestrel.toml [llm]`; the source is renamed to `.bak`, your prior `kestrel.toml` is timestamp-backed-up, and the operation is idempotent.

### Environment Variables

See `.env.example` for a complete list. Key variables:

**LLM Providers:**
- `OPENROUTER_API_KEY`: OpenRouter API key (recommended - access to multiple providers)
- `OPENAI_API_KEY`: OpenAI API key for cloud models
- `ANTHROPIC_API_KEY`: Anthropic API key for Claude models

**Storage:**
- `KESTREL_DB_PATH`: Directory where the agent database is stored (default: `./agent_data`). This is a **directory** path -- the database file `kestrel_prime.db` is created inside it.
- `KESTREL_DATA_KEY`: Fernet encryption key for data at rest

**GitHub Integration:**
- `GITHUB_TOKEN`: Personal access token for GitHub features
- `GITHUB_SELF_REPO`: Agent's source repository (default: `KestrelSovereignAI/kestrel-sovereign`)

## 🚢 Deployment

Kestrel supports multiple deployment targets. See [KESTREL_FEATURES.md](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/KESTREL_FEATURES.md#11-deployment) for the full catalog.

### Cloud Run (Serverless)

Scales to zero when idle ($0/month), auto-scales under load. Each sovereign agent gets its own service.

```bash
# One-time: set up GCP secrets from .env
scripts/cloudrun/setup_secrets.sh

# Build and push to GCR
scripts/cloudrun/build.sh

# Deploy to dev (scales to zero) or prod (always warm)
scripts/cloudrun/deploy_dev.sh
scripts/cloudrun/deploy_prod.sh
```

Auto-deploys on version tags via [GitHub Actions](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/.github/workflows/deploy.yml).

### Docker (Local)

```bash
# Remote LLM — smallest image (~500MB)
docker build -f docker/Dockerfile.remote -t kestrel .
docker run -p 8888:8888 -e OPENAI_API_KEY=... kestrel

# Standalone with Ollama (no API keys needed)
docker build -f docker/Dockerfile.standalone -t kestrel-standalone .
docker run -p 8888:8888 kestrel-standalone

# GPU with CUDA
docker build -f docker/Dockerfile.gpu -t kestrel-gpu .
docker run --gpus all -p 8888:8888 kestrel-gpu
```

## 🔐 Backups and Storage Tiers

Backups can be created interactively from the agent using privacy-gated storage tiers:

- `local`: cache the backup `tar.gz` locally only
- `ipfs`: encrypt, gzip, and store on IPFS; also cache locally
- `filecoin`: same as `ipfs`, plus propose a Filecoin deal via Lotus when available; falls back to local if not

Privacy gating:

- `EPHEMERAL`: backups disabled
- `ISOLATED`: cache-only; use `!promote-backup` to save the isolated session and back it up
- `ANONYMOUS`: backups allowed; encryption forced for the `filecoin` tier
- `NORMAL`: backups allowed; encryption configurable (default on)
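The gating rules above can be sketched as a small decision function (an illustrative model of the table, not Kestrel's actual implementation; `Privacy` and `gate_backup` are hypothetical names):

```python
from enum import Enum

class Privacy(Enum):
    EPHEMERAL = 1
    ISOLATED = 2
    ANONYMOUS = 3
    NORMAL = 4

def gate_backup(mode: Privacy, tier: str, encrypt: bool = True):
    """Return (allowed, encrypt) per the privacy-gating rules above."""
    if mode is Privacy.EPHEMERAL:
        return False, encrypt                # backups disabled
    if mode is Privacy.ISOLATED:
        return tier == "local", encrypt      # cache-only until promoted
    if mode is Privacy.ANONYMOUS and tier == "filecoin":
        return True, True                    # encryption forced
    return True, encrypt                     # NORMAL: configurable
```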

Usage from the REPL:

```text
!backup tier=local
!backup tier=ipfs
!backup tier=filecoin
!promote-backup tier=filecoin
```

Each backup produces a `backup_artifact` node in the graph linked to the agent with properties like `content_hash`, `ipfs_cid`, `filecoin_deal_id`, `encrypted`, and timestamp.
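The shape of such a node might look like the following (property names are taken from the list above; the constructor and timestamp format are assumptions for illustration):

```python
import hashlib
import time

def make_backup_artifact(payload: bytes, ipfs_cid=None,
                         filecoin_deal_id=None, encrypted=True) -> dict:
    # Illustrative backup_artifact node: content-addressed by SHA-256,
    # with optional IPFS/Filecoin identifiers filled in per tier.
    return {
        "type": "backup_artifact",
        "content_hash": hashlib.sha256(payload).hexdigest(),
        "ipfs_cid": ipfs_cid,
        "filecoin_deal_id": filecoin_deal_id,
        "encrypted": encrypted,
        "created_at": time.time(),
    }
```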

## 🔒 Encryption at Rest

- Files and conversation history can be encrypted at rest by setting `KESTREL_DATA_KEY` (Fernet key or passphrase):

```bash
export KESTREL_DATA_KEY=$(python - <<'PY'
from cryptography.fernet import Fernet
print(Fernet.generate_key().decode())
PY
)
```

- With the key set, stored file blobs and conversation entries are encrypted transparently. Backups remain encrypted by default. For production, wire the backup master key to an env/KMS and avoid the dev placeholder.
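The transparent pattern described above amounts to a Fernet round-trip on every blob write/read. A minimal sketch (illustrative of the pattern, not Kestrel's actual code path):

```python
from cryptography.fernet import Fernet

# The value KESTREL_DATA_KEY would hold:
key = Fernet.generate_key()
f = Fernet(key)

# On write: blobs are encrypted before hitting disk.
token = f.encrypt(b"conversation entry")

# On read: decrypted transparently for the caller.
assert f.decrypt(token) == b"conversation entry"
```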

### Optional: Full-DB Encryption (SQLCipher)

- If you install `pysqlcipher3` and set `KESTREL_DB_KEY`, the SQLite connection will use SQLCipher and encrypt the entire DB:

```bash
export KESTREL_DB_KEY="your-db-passphrase"
uv run python server.py
```

- Without `pysqlcipher3`, the system falls back to plain SQLite. File blobs and conversations are still encrypted with `KESTREL_DATA_KEY` when it is set.

## 🧩 OpenAI-Compatible API

The server exposes OpenAI-compatible endpoints for use with third-party clients:

- `GET /v1/models`
- `POST /v1/chat/completions`

For most users, the built-in **Sovereign Console** at `http://localhost:8888` is the easiest way to interact with your agent (see the [Web UI section](#web-ui-sovereign-console) above). If you prefer an external client, point any OpenAI-compatible tool (e.g., [Open WebUI](https://github.com/open-webui/open-webui)) at your server's `/v1/chat/completions` endpoint. Use the model name from `/v1/models`.
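A stdlib-only client sketch against these endpoints (assumes a Kestrel server on `localhost:8888`; the helper names are hypothetical, but the request shape follows the standard OpenAI chat-completions format):

```python
import json
import urllib.request

def build_payload(prompt: str, model: str) -> dict:
    # Standard OpenAI-style chat-completions request body.
    return {"model": model,
            "messages": [{"role": "user", "content": prompt}]}

def chat(prompt: str, model: str,
         base: str = "http://localhost:8888") -> str:
    req = urllib.request.Request(
        f"{base}/v1/chat/completions",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Fetch the model name from `GET /v1/models` first and pass it as `model`.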

## 🤝 Contributing

1. Fork the repository
2. Create a feature branch
3. Add tests for new functionality
4. Run the test suite: `python -m pytest -x`
5. Submit a pull request

## 📄 License

Apache 2.0 — see [LICENSE](https://github.com/KestrelSovereignAI/kestrel-sovereign/blob/main/LICENSE) for details.

## 🆘 Support

- **Issues**: GitHub Issues for bug reports and feature requests
- **Discussions**: GitHub Discussions for questions and ideas
- **Documentation**: See `features/` directory for detailed guides

---

*Kestrel: Where AI meets sovereignty.* 

## 📚 Key Files Reference

| File | Purpose |
|------|---------|
| `kestrel_sovereign/cli.py` | Canonical `kestrel` CLI entry point |
| `server.py` | FastAPI agent server |
| `host.py` | Multi-agent host (Cloud Run) |
| `main.py` | Direct interactive REPL |
| `kestrel.toml` | Unified config (LLM, agents, features). `[llm]` holds provider config. |
| `KESTREL_FEATURES.md` | Canonical feature inventory |
| `kestrel_sovereign/kestrel_agent.py` | Core agent logic |
| `kestrel_sovereign/agent_config.py` | Per-agent config loader |
| `kestrel_sovereign/inception_service.py` | New agent creation (DID + genesis audit) |
| `kestrel_sovereign/data/feature_registry.toml` | Runtime feature registry |
| `agent_data/<name>/kestrel.toml` | Per-agent configuration |
| `agent_data/<name>/kestrel_prime.db` | Agent database |
| `docs/**/*.md` | Detailed documentation |

## Architecture

### Storage System

The Kestrel storage system is designed to be modular and extensible. It is composed of several specialized components, orchestrated by a high-level facade.

*   **`storage.Database`**: Manages the low-level SQLite connection and schema.
*   **`storage.FileStore`**: Handles the storage and retrieval of files.
*   **`storage.GraphStore`**: Manages the knowledge graph (nodes and edges).
*   **`storage.RAGStore`**: Responsible for document chunking and semantic search for the RAG pipeline and "case law" system.
*   **`storage.ConversationStore`**: Manages the agent's conversation history.

The main `Storage` class in `storage/__init__.py` acts as a facade, providing a single, unified interface to these components.
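A toy sketch of that facade wiring (component names come from the list above; the constructors and attribute names are assumptions, not Kestrel's real signatures):

```python
class Database:
    """Owns the low-level SQLite connection and schema."""
    def __init__(self):
        self.connected = True

class FileStore:
    def __init__(self, db: Database):
        self.db = db

class GraphStore:
    def __init__(self, db: Database):
        self.db = db

class Storage:
    """Facade: a single object exposing every specialized store,
    all sharing one underlying Database."""
    def __init__(self):
        self.db = Database()
        self.files = FileStore(self.db)
        self.graph = GraphStore(self.db)
```

The payoff of the facade is that callers hold one `Storage` object while each store stays independently testable.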

### Genesis Self-Audit

To ensure the integrity of all new agents, Kestrel implements a "genesis self-audit." When a new agent is created via `inception_service.py`:

1.  The agent's foundational files (keys, database) are created.
2.  The `KESTREL_CONSTITUTION.md` is stored as the agent's first memory.
3.  The agent is instantiated and its very first action is to perform an integrity audit on its own constitution.
4.  If the audit returns a high risk level, the creation process is aborted, and all generated files are cleaned up, preventing the existence of a non-compliant agent.

This process guarantees that every agent in the ecosystem starts from a foundation of verifiable integrity.
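The create-audit-abort flow above can be sketched as follows (`audit_constitution` is a hypothetical stand-in for the agent's real integrity audit; the file layout is simplified for illustration):

```python
import shutil
import tempfile
from pathlib import Path

def create_agent(name: str, audit_constitution):
    home = Path(tempfile.mkdtemp(prefix=f"{name}-"))
    try:
        # Step 1-2: create foundational files, store the constitution.
        (home / "KESTREL_CONSTITUTION.md").write_text("# constitution\n")
        # Step 3: the agent's first action is auditing its own constitution.
        if audit_constitution(home / "KESTREL_CONSTITUTION.md") == "high":
            raise RuntimeError("genesis self-audit failed")
        return home                  # agent starts from verified integrity
    except RuntimeError:
        # Step 4: abort and clean up all generated files.
        shutil.rmtree(home)
        raise
```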

## 🔄 Next Steps

After getting started:

1.  **Explore Features**: Read `features/` documentation 
