Metadata-Version: 2.4
Name: aibrain
Version: 1.5.37
Summary: AI agent brain with memory, teams, flows, document ingestion, and MCP — your agent, but better every day
Author: Matthew McKee
Author-email: Matthew McKee <decker.ops@gmail.com>
License: Proprietary
Project-URL: Homepage, https://myaibrain.org
Project-URL: Repository, https://github.com/sindecker/aibrain
Project-URL: Issues, https://myaibrain.org/support
Project-URL: Documentation, https://myaibrain.org/docs
Keywords: ai,agent,memory,brain,mcp,skills,retrieval,llm,teams,flows,ingestion,orchestration
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Application Frameworks
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Framework :: AsyncIO
Requires-Python: <3.15,>=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: requests>=2.31.0
Requires-Dist: python-dateutil>=2.9.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: cryptography>=41.0.0
Requires-Dist: fastapi>=0.111.0
Requires-Dist: uvicorn[standard]>=0.29.0
Requires-Dist: apscheduler<4.0,>=3.10.0
Requires-Dist: python-multipart>=0.0.9
Requires-Dist: bcrypt>=4.0.0
Requires-Dist: websockets>=12.0
Provides-Extra: api
Requires-Dist: redis>=5.0.0; extra == "api"
Provides-Extra: embeddings
Requires-Dist: sentence-transformers>=3.0.0; extra == "embeddings"
Requires-Dist: sqlite-vec>=0.1.0; extra == "embeddings"
Provides-Extra: mcp
Requires-Dist: mcp>=0.9.0; extra == "mcp"
Provides-Extra: workflows
Requires-Dist: beautifulsoup4>=4.12.0; extra == "workflows"
Requires-Dist: feedparser>=6.0.0; extra == "workflows"
Requires-Dist: icalendar>=5.0.0; extra == "workflows"
Provides-Extra: boss
Requires-Dist: redis>=5.0.0; extra == "boss"
Requires-Dist: docker>=7.0.0; extra == "boss"
Provides-Extra: browser
Requires-Dist: browser-use>=0.12.0; extra == "browser"
Provides-Extra: docs
Requires-Dist: docling>=2.8.0; extra == "docs"
Provides-Extra: finance
Requires-Dist: yfinance>=0.2.30; extra == "finance"
Provides-Extra: payments
Requires-Dist: stripe>=8.0.0; extra == "payments"
Provides-Extra: ingest
Requires-Dist: pdfplumber>=0.10.0; extra == "ingest"
Requires-Dist: openpyxl>=3.1.0; extra == "ingest"
Provides-Extra: pdf
Requires-Dist: pypdf>=4.0.0; extra == "pdf"
Provides-Extra: tts
Requires-Dist: kokoro-onnx>=0.4.0; extra == "tts"
Requires-Dist: sounddevice>=0.4.6; extra == "tts"
Provides-Extra: supabase
Requires-Dist: supabase>=2.0.0; extra == "supabase"
Provides-Extra: export
Requires-Dist: markdown>=3.5.0; extra == "export"
Requires-Dist: xhtml2pdf>=0.2.17; extra == "export"
Provides-Extra: export-advanced
Requires-Dist: aibrain[export]; extra == "export-advanced"
Requires-Dist: weasyprint>=61.0; extra == "export-advanced"
Provides-Extra: dev
Requires-Dist: pytest>=8.0; extra == "dev"
Requires-Dist: httpx>=0.27.0; extra == "dev"
Requires-Dist: ruff>=0.4.0; extra == "dev"
Provides-Extra: all
Requires-Dist: aibrain[api,boss,browser,docs,embeddings,export,finance,ingest,mcp,payments,pdf,supabase,tts,workflows]; extra == "all"
Dynamic: author
Dynamic: license-file

# AIBrain — Your AI agent that remembers, learns, and acts

> **80.0% recall on LongMemEval M with a 109M model. 99.8% on MSDialog. Zero-parameter FTS5 achieves 97.1% NDCG@5 on dialogue retrieval. All on a consumer laptop, no GPU required.** One install. 80 workflows. Agent teams. Flow engine. Document ingestion. Universal MCP. Dual-system memory that compounds across sessions. Runs locally, no cloud lock-in.

AIBrain is a self-hosted operating system for AI agents. It gives any agent persistent memory, typed Agent/Task/Team composition, a decorator-driven Flow engine, document ingestion, universal MCP client connectivity, a reactive workflow engine, a Complementary Learning Systems (CLS) cognitive substrate with a weekly consolidation cycle, multi-model LLM routing, an approval queue, inter-agent messaging, and 80 ready-to-run workflows — all behind a 42-page Next.js dashboard. Deploy it on a laptop, a VPS, or in Docker; your agent carries its entire brain with it.

![AIBrain](https://img.shields.io/badge/AIBrain-1.5.36-00F5D4?style=flat-square) ![License](https://img.shields.io/badge/license-Proprietary-blue?style=flat-square) ![Python](https://img.shields.io/badge/python-3.10+-blue?style=flat-square) ![Tests](https://img.shields.io/badge/tests-2373-00F5D4?style=flat-square) ![Workflows](https://img.shields.io/badge/workflows-80-00F5D4?style=flat-square) ![Dashboard](https://img.shields.io/badge/dashboard-42_pages-61DAFB?style=flat-square)

---

## Why AIBrain?

Most AI memory systems are toys. They store everything, retrieve nothing useful, and require expensive GPUs to run. AIBrain is different:

- **Verified retrieval performance.** On LongMemEval M (500 instances, the standard benchmark for long-term conversational memory), AIBrain's SelRoute system achieves Ra@5 = 0.800 with a 109M bge-base model — beating the strongest published baseline (Contriever + LLM fact keys, 0.762) by +0.038 on recall and +0.180 on NDCG@5. A 22MB MiniLM model achieves Ra@5 = 0.785, statistically equivalent to models 50% larger. The FTS5 baseline (zero trainable parameters, zero GPU) achieves NDCG@5 = 0.692 on LongMemEval M, exceeding every published system, including 1.5B-parameter models.
- **Near-perfect on domain-specific retrieval.** On MSDialog (2,199 tech-support dialogues), AIBrain achieves Ra@5 = 0.998 with a 22MB MiniLM model — near-perfect retrieval for technical support contexts.
- **Zero-parameter dialogue retrieval.** On LMEB dialogue (840 instances), the FTS5 zero-ML retriever achieves NDCG@5 = 0.971 — no neural parameters, no GPU, no training data.
- **Total evaluation instances: 62,792+.** Every number is from verified JSON files in the benchmarks/ directory. The methodology is described in the peer-reviewed SelRoute paper (McKee, 2026, arXiv:2604.02431).
- **All benchmarks run on a consumer laptop.** No GPU required. No cloud credits. No special hardware.

The secret is CLS: a Complementary Learning Systems dual-system memory inspired by the mammalian brain. Every session writes to fast hippocampal memory. A weekly `aibrain dream` consolidation cycle slowly extracts patterns and upgrades routing weights. The brain gets measurably better at subsequent tasks; it does not merely store more.

---

## What's New in v1.5.36

- **Claude Code 2.1.123 compatibility** — MCP server no longer crashes on startup. Claude Code 2.1.123 changed MCP server launch to replace (not merge) the subprocess environment. `aibrain_db.py` `_resolve_root()` now guards `Path.home()` with `try/except RuntimeError`, falling back to `USERPROFILE`/`HOME` env vars with a clear error message if both are absent. Setup wizard now writes `USERPROFILE`, `HOME`, `SystemRoot`, and `PATH` into the generated MCP `env` dict so new installs work correctly out of the box.
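
The guard pattern described above can be sketched as follows; `resolve_home` is an illustrative stand-in, not the actual `aibrain_db.py` source:

```python
import os
from pathlib import Path


def resolve_home() -> Path:
    """Resolve the user's home directory even when the parent process
    stripped the environment (e.g. an MCP launcher that replaces,
    rather than merges, the subprocess env)."""
    try:
        # Path.home() raises RuntimeError when the home directory
        # cannot be determined from the environment.
        return Path.home()
    except RuntimeError:
        # Fall back to explicit env vars: Windows first, then POSIX.
        for var in ("USERPROFILE", "HOME"):
            value = os.environ.get(var)
            if value:
                return Path(value)
        raise RuntimeError(
            "Cannot locate a home directory: set USERPROFILE or HOME "
            "in the MCP server's env block."
        )
```

Writing `USERPROFILE`/`HOME` into the generated MCP `env` dict makes the fallback branch reachable even under launchers that pass a minimal environment.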

**Previously in v1.5.35:** Budget schema migrations (`budget_policies` + `budget_incidents` tables), event bus drain (`futures.wait` timeout fix), 23 budget tests + 8 event bus tests.

**Previously in v1.5.18:** Honest cost tracking, Dream consolidation (CLS REM phase), rubric signal fixes, metrics sentinel handling, CLI help expanded to 28 new commands, dashboard empty-state guidance.

**Previously in v1.5.17:** `litellm` added to core dependencies, `AIBRAIN_ENV` default changed to `development`, dashboard directory clarified, dashboard setup hint in CLI.

**Previously in v1.5.16:** Security hardening — IPv4-mapped IPv6 SSRF bypass fixed, `startswith()` path-traversal boundary tightened, KG foreign-key constraint set to `ON DELETE SET NULL`, import `db_path` no-op fixed. MCP server lazy-load fix.

**Previously in v1.5.14:** Temporal knowledge graph, local-first conversation history import (`aibrain import`), Cursor plugin, n8n community node, Supabase backend, Boss Agent SQLite persistence, Bolt.diy and Base44 starter templates.

**Previously in v1.5.12:** 16 framework adapters (LangChain, CrewAI, AutoGen, Haystack, and 12 more), Windows NSIS installer, auto-updater + backend watchdog, full dark/light mode WCAG 2.1 AA, starter memories, OAuth PKCE flow, Goals slide-over, memory lifecycle hooks.

**Introduced in v1.5.0:** Graph memory, vault citations, data classification — SQLite relations table with BFS path-finding, memory_id citations on every recall, SECRET/SENSITIVE routing to local Ollama.

---

## Install

Pick the method that matches your environment. All of them install the same `aibrain` package.

**One-line installer (macOS / Linux / WSL)**
```bash
curl -sSL https://myaibrain.org/install | sh
```
Creates an isolated venv at `~/.aibrain/venv`, pip-installs `aibrain`, and symlinks the CLI into `/usr/local/bin` (or `~/.local/bin` fallback). Re-run any time to upgrade. Python 3.10+ required.

**One-line installer (Windows PowerShell)**
```powershell
irm https://myaibrain.org/install.ps1 | iex
```
Creates an isolated venv at `%USERPROFILE%\.aibrain\venv`, pip-installs `aibrain`, and adds the venv Scripts dir to your user PATH. Python 3.10+ required.

**Homebrew (macOS / Linux)**
```bash
brew tap sindecker/tap
brew install aibrain
```
Installs into a Homebrew-managed venv and symlinks the CLI.

**pip (any platform)**
```bash
pip install aibrain
```

**Docker**
```bash
docker pull sindecker/aibrain:latest
```

---

## Quick Start

```bash
# Install
pip install aibrain

# Initialize your brain
aibrain init

# Start the server
aibrain serve

# Open the dashboard (macOS; use xdg-open on Linux or start on Windows)
open http://localhost:3000
```

Your agent now has persistent memory. Every conversation, every workflow, every decision is stored and retrievable. Run `aibrain dream` weekly to consolidate patterns and improve retrieval.

---

## Benchmark Results

AIBrain's SelRoute retrieval system has been evaluated on 62,792+ instances across multiple benchmarks. All results are from verified JSON files in the benchmarks/ directory.

### LongMemEval M (500 instances)

| System | Parameters | Ra@5 | NDCG@5 |
|--------|-----------|------|--------|
| SelRoute bge-base (metadata routing) | 109M | **0.800** | **0.812** |
| SelRoute bge-small (metadata routing) | 33M | 0.786 | 0.718 |
| SelRoute FTS5 (zero-ML, zero-GPU) | 0 | 0.745 | 0.692 |
| all-MiniLM-L6-v2 | 22M | 0.785 | 0.717 |

### LongMemEval S (500 instances)

| System | Parameters | Ra@5 |
|--------|-----------|------|
| SelRoute bge-base | 109M | **0.920** |
| SelRoute Oracle | — | 0.992 |

### MSDialog (2,199 tech-support dialogues)

| System | Parameters | Ra@5 |
|--------|-----------|------|
| SelRoute MiniLM | 22M | **0.998** |

### LoCoMo (1,986 QA pairs)

| System | Parameters | Recall@5 | Ra@5 |
|--------|-----------|-----------|------|
| SelRoute FTS5 (zero-ML) | 0 | **0.859** | **0.767** |

### QReCC (52,678 conversational queries)

| System | Parameters | MRR |
|--------|-----------|-----|
| SelRoute FTS5+reasoning | 0 | **51.66** |

### LMEB dialogue (840 instances)

| System | Parameters | NDCG@5 |
|--------|-----------|--------|
| SelRoute FTS5 (zero-ML) | 0 | **0.971** |

**Key findings:**
1. A 22MB MiniLM model achieves Ra@5 = 0.785 on LongMemEval M — competitive retrieval with a model that fits in RAM on any device.
2. A zero-parameter FTS5 retriever achieves NDCG@5 = 0.971 on LMEB dialogue — no GPU, no training data.
3. All benchmarks run on a consumer laptop. No GPU required.
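
The zero-ML path is plain SQLite FTS5, which ships with Python's standard `sqlite3` module on most builds. A minimal sketch of FTS5-based memory retrieval (the table name and sample memories are illustrative, not AIBrain's actual schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE memories USING fts5(content)")
conn.executemany(
    "INSERT INTO memories (content) VALUES (?)",
    [
        ("User prefers dark mode in the dashboard",),
        ("Deployed the staging server on a VPS last Tuesday",),
        ("The weekly consolidation job runs on Sundays",),
    ],
)

# BM25-ranked full-text search: no embeddings, no GPU, no training data.
rows = conn.execute(
    "SELECT content FROM memories WHERE memories MATCH ? ORDER BY rank LIMIT 5",
    ("consolidation",),
).fetchall()
print(rows[0][0])  # -> The weekly consolidation job runs on Sundays
```

Because FTS5 is a compiled SQLite extension, this path adds no model weights to the install and runs on any hardware that can open a SQLite file.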

---

## Architecture

### Complementary Learning Systems (CLS)

AIBrain implements a dual-system memory architecture inspired by the mammalian brain:

- **Hippocampal fast encoding.** Every session writes immediately to short-term memory. No indexing delay, no batch processing. Your agent remembers what just happened.
- **Neocortical consolidation.** A weekly `aibrain dream` cycle slowly extracts patterns from accumulated sessions, upgrades routing weights, and consolidates long-term knowledge. The brain gets measurably better at subsequent tasks.
- **SelRoute routing.** The SelRoute system (arXiv:2604.02431) routes each query to the optimal retrieval strategy — dense embedding, sparse FTS5, or hybrid — based on query characteristics. This is what enables a 22MB model to match 1.5B-parameter systems.
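
The routing idea can be illustrated with a toy dispatcher. The heuristics below are invented for illustration only; SelRoute's actual policy uses learned weights, not hand-written rules:

```python
def route_query(query: str) -> str:
    """Pick a retrieval strategy from coarse query features.

    Toy heuristic only: real SelRoute routing is driven by weights
    that the consolidation cycle updates over time.
    """
    tokens = query.split()
    # Short, keyword-like queries suit sparse lexical search (FTS5).
    if len(tokens) <= 3:
        return "fts5"
    # Quoted phrases and all-caps identifiers favor exact lexical matching.
    if '"' in query or any(t.isupper() for t in tokens):
        return "fts5"
    # Long natural-language questions suit dense embeddings.
    if len(tokens) > 12:
        return "dense"
    # Everything else gets both retrievers, with score fusion.
    return "hybrid"


print(route_query("error code 0x80070057"))  # -> fts5
print(route_query(
    "what did we decide last week about rolling out "
    "the new deployment schedule to staging"
))  # -> dense
```

The payoff of routing is that each query only pays for the retriever it needs, which is how a small dense model plus a free lexical index can cover the workload of a much larger single model.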

### Boss Agent

Multi-agent orchestration with one orchestrator and multiple isolated workers sharing a single brain. Each worker has its own context, memory, and tool access, but all share the same persistent knowledge base.
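
The shared-brain pattern can be sketched in a few lines; `SharedBrain` and `Worker` here are illustrative stand-ins, not AIBrain's actual classes:

```python
from dataclasses import dataclass, field


class SharedBrain:
    """One persistent knowledge base visible to every worker."""

    def __init__(self) -> None:
        self._memories: list[str] = []

    def remember(self, fact: str) -> None:
        self._memories.append(fact)

    def recall(self, keyword: str) -> list[str]:
        return [m for m in self._memories if keyword in m]


@dataclass
class Worker:
    """Isolated per-worker context, but a shared long-term brain."""

    name: str
    brain: SharedBrain
    context: list[str] = field(default_factory=list)

    def work(self, task: str) -> None:
        self.context.append(task)                     # private to this worker
        self.brain.remember(f"{self.name}: {task}")   # visible to all workers


brain = SharedBrain()
researcher = Worker("researcher", brain)
writer = Worker("writer", brain)

researcher.work("collected benchmark numbers")
writer.work("drafted release notes")

# The writer can recall what the researcher stored; contexts stay isolated.
print(brain.recall("benchmark"))  # -> ['researcher: collected benchmark numbers']
print(writer.context)             # -> ['drafted release notes']
```

The design point is the asymmetry: task state is per-worker and disposable, while knowledge is centralized and persistent, so adding workers never fragments what the system knows.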

### Companies / RBAC

Full organizational hierarchy — agents, tasks, roles, and approval flows. Manage team access, delegate tasks, and enforce governance policies.

### Brain Marketplace

Share or sell trained brains via git. Export your brain, push it to a repository, and let others import it. Brains carry learned patterns, routing weights, and consolidated knowledge.

### Satellite DBs

Federated search across multiple brain instances. Query one brain and get results from all connected brains.
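
Federated search reduces to fanning a query out over several databases and merging the hits. A toy sketch using SQLite (the schema, merge rule, and brain names are invented for illustration):

```python
import sqlite3


def make_brain(facts: list[str]) -> sqlite3.Connection:
    """Build a throwaway in-memory brain holding some facts."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE memories (content TEXT)")
    conn.executemany("INSERT INTO memories VALUES (?)", [(f,) for f in facts])
    return conn


def federated_search(brains: list[sqlite3.Connection], keyword: str) -> list[str]:
    """Query every connected brain and concatenate the matches."""
    hits: list[str] = []
    for conn in brains:
        hits += [
            row[0]
            for row in conn.execute(
                "SELECT content FROM memories WHERE content LIKE ?",
                (f"%{keyword}%",),
            )
        ]
    return hits


laptop = make_brain(["deploy checklist lives in the ops repo"])
vps = make_brain(["deploy key rotated in March"])

print(federated_search([laptop, vps], "deploy"))
```

A production version would rank the merged results and deduplicate across brains; the sketch only shows the fan-out-and-merge shape.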

---

## Pricing

| Tier | Price | Features |
|------|-------|----------|
| **Free** | $0 | Unlimited local usage. All features. No cloud dependency. |
| **Pro** | $9.95/mo | Priority support, early access to new features, cloud sync. |
| **Team** | $29.95/mo | Everything in Pro, plus RBAC, audit logs, dedicated support. |

All tiers include the same core AIBrain software. The difference is support level and cloud features.

---

## CLI Entrypoints

- `aibrain` — Main CLI
- `aibrain-server` — Start the backend server
- `aibrain-mcp` — MCP server
- `aibrain-compress` — SelRoute compression library (50-99% token savings on git/build/test output)
- `aibrain-settings` — Configure AIBrain
- `aibrain-demo` — Run a demo

---

## License

Proprietary. See LICENSE file for details.

---

## Contributing

See CONTRIBUTING.md for development setup and contribution guidelines.
