AIBrain

Did You Know

Facts about how AIBrain thinks

Benchmarks

SelRoute beats 1.5B-parameter models with only 22M parameters

Ra@5=0.789 on LongMemEval_M, outperforming Stella V5 1.5B (0.720) and Meta Contriever (0.723). Routing queries to the right retrieval strategy matters more than raw parameter count. SelRoute achieves this with MiniLM-L6-v2 — a 22MB model that runs on any CPU.

Benchmarks

Zero-ML beats ML on some queries

FTS5 — pure keyword search with zero parameters and zero ML — outperforms every embedding model on the LoCoMo conversational memory benchmark. SelRoute detects these query types and routes to FTS5 automatically. Routing to zero-ML was not the plan; it was the discovery.
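
For the curious, keyword retrieval of this kind is a few lines of SQLite. A minimal sketch using Python's built-in sqlite3 module, with an illustrative schema rather than AIBrain's actual tables:

```python
import sqlite3

# In-memory database with an FTS5 virtual table (illustrative schema).
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE memories USING fts5(content)")
db.executemany(
    "INSERT INTO memories (content) VALUES (?)",
    [("Alice moved to Berlin in March",),
     ("Bob prefers tea over coffee",),
     ("Alice adopted a cat named Miso",)],
)

# Pure keyword search: BM25-ranked, zero parameters, zero ML.
rows = db.execute(
    "SELECT content, bm25(memories) AS score "
    "FROM memories WHERE memories MATCH ? ORDER BY score",
    ("Alice AND Berlin",),
).fetchall()
for content, score in rows:
    print(f"{score:.3f}  {content}")
```

FTS5's bm25() returns more negative scores for better matches, so ascending order puts the best hit first.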

Benchmarks

Near-perfect recall under ideal conditions

On the LongMemEval Oracle split (optimal routing), SelRoute achieves Ra@5=0.992. This is the upper bound that validates the architecture: when the routing decision is correct, recall is essentially perfect. The challenge — and what SelRoute solves — is making that decision correctly on every real query.

Architecture

Built on how biological brains actually work

AIBrain is grounded in Complementary Learning Systems (CLS) theory — the neuroscience of how biological brains use two memory systems. A fast hippocampal system captures new events quickly. A slow neocortical system builds stable patterns through repetition. AIBrain implements both: episodic memory for new events, semantic memory for consolidated knowledge.
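
As a rough sketch of what that two-store split implies in code (the class and field names here are hypothetical, not AIBrain's API):

```python
from dataclasses import dataclass, field

@dataclass
class Brain:
    """Hypothetical two-store layout mirroring CLS theory; AIBrain's
    real classes and fields will differ."""
    episodic: list[dict] = field(default_factory=list)      # fast: raw events
    semantic: dict[str, int] = field(default_factory=dict)  # slow: consolidated patterns

brain = Brain()
# New events land in the fast episodic store immediately.
brain.episodic.append({"text": "Alice moved to Berlin", "timestamp": 1700000000.0})
```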

Architecture

The nightly dream cycle

Run `aibrain dream` to trigger a consolidation pass that mirrors biological sleep. Recent episodic memories are replayed, stable patterns are extracted, and durable knowledge moves to long-term semantic memory. Without consolidation, knowledge accumulates but doesn't compound. The dream cycle is how the brain gets smarter over time.
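
A hedged sketch of what one such consolidation pass could do. Only the `aibrain dream` command comes from the source; the function, thresholds, and bigram "pattern extractor" below are stand-ins:

```python
import time

def extract_patterns(text: str) -> list[str]:
    # Stand-in extractor: lowercase word bigrams. The real pass would
    # pull out entities and relations; this is purely illustrative.
    words = text.lower().split()
    return [f"{a} {b}" for a, b in zip(words, words[1:])]

def dream(episodic: list[dict], semantic: dict[str, int],
          replay_window_s: float = 86400, min_support: int = 3) -> None:
    """Hypothetical consolidation pass in the spirit of `aibrain dream`:
    replay recent episodes, promote recurring patterns to semantic memory."""
    cutoff = time.time() - replay_window_s
    support: dict[str, int] = {}
    for episode in episodic:
        if episode["timestamp"] >= cutoff:          # replay recent only
            for pattern in extract_patterns(episode["text"]):
                support[pattern] = support.get(pattern, 0) + 1
    for pattern, count in support.items():
        if count >= min_support:                    # stable -> durable
            semantic[pattern] = count
```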

Architecture

Memory changes every time you access it

"You can't step in the same river twice." Every retrieval in AIBrain slightly reshapes the retrieved memory — updating its recency, weight, and connections. This mirrors how human memory actually works: each recall reconstructs and modifies the trace. Memory is a living process, not a static lookup table.

Architecture

Three ways to forget — all intentional

AIBrain forgets via temporal decay (old unused memories fade), importance weighting (low-signal memories fade faster), and conflict resolution (contradicted facts get demoted). Forgetting is not a bug — it's how the brain stays coherent as new information arrives and old information becomes stale.
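
One way to picture the three mechanisms as a single score. The formula below is illustrative, not AIBrain's actual math:

```python
def effective_weight(memory: dict, now: float,
                     half_life_s: float = 30 * 86400) -> float:
    """Hypothetical combined forgetting score."""
    age = now - memory["last_accessed"]
    decay = 0.5 ** (age / half_life_s)            # temporal decay: unused fades
    base = memory.get("importance", 1.0)          # low-signal fades faster
    penalty = 0.1 if memory.get("contradicted") else 1.0  # conflict demotion
    return base * decay * penalty
```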

Architecture

Knowledge as a graph, not a flat list

When AIBrain stores a fact, it also extracts how that fact connects to related concepts — entities, relationships, and semantic links. Retrieval can traverse these connections, surfacing related context you didn't explicitly query for. The connectome metaphor: knowledge modelled as a web, not a spreadsheet.
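
A toy version of that traversal, with a hand-written graph standing in for the extracted connectome:

```python
from collections import deque

# Illustrative connectome: entities as nodes, typed links as edges.
graph = {
    "Alice": [("lives_in", "Berlin"), ("owns", "Miso")],
    "Miso": [("is_a", "cat")],
    "Berlin": [("located_in", "Germany")],
}

def related(entity: str, depth: int = 2) -> list[tuple[str, str, str]]:
    """Breadth-first walk over semantic links: retrieval can surface
    connected context the query never mentioned."""
    seen, queue, out = {entity}, deque([(entity, 0)]), []
    while queue:
        node, d = queue.popleft()
        if d == depth:
            continue
        for relation, neighbor in graph.get(node, []):
            out.append((node, relation, neighbor))
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append((neighbor, d + 1))
    return out

print(related("Alice"))  # also pulls in Miso->cat and Berlin->Germany
```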

Infrastructure

22MB embedding model, zero GPU required

The default MiniLM-L6-v2 model is 22MB and runs entirely on CPU. AIBrain works on a laptop, a VPS, a Raspberry Pi — any machine with Python installed. The model is downloaded once on first use and cached locally. No GPU, no cloud, no subscription required for the core memory system.
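
If you want to try the model directly, the checkpoint is commonly loaded through the sentence-transformers library as all-MiniLM-L6-v2; whether AIBrain wraps it exactly this way is an assumption:

```python
from sentence_transformers import SentenceTransformer

# Downloaded once on first use, cached locally thereafter; runs on CPU.
# Assumption: AIBrain's default "MiniLM-L6-v2" corresponds to the public
# all-MiniLM-L6-v2 checkpoint.
model = SentenceTransformer("all-MiniLM-L6-v2", device="cpu")

vectors = model.encode(["Alice moved to Berlin", "Where does Alice live?"])
print(vectors.shape)  # (2, 384): 384-dimensional embeddings
```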

Infrastructure

The brain is a single file

AIBrain stores everything in a single SQLite file running in WAL mode. No database server, no open ports, no Docker required for the core brain. Copy it to move your brain between machines. Export it to the Brain Marketplace as a portable .db file. The simplest architecture that could possibly work.
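
Both properties (single file, WAL) are plain SQLite settings; the filename below is illustrative:

```python
import sqlite3

# The whole brain lives in one file; WAL mode lets readers proceed
# concurrently while a writer is active.
db = sqlite3.connect("brain.db")
db.execute("PRAGMA journal_mode=WAL")
print(db.execute("PRAGMA journal_mode").fetchone())  # ('wal',)
```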

Infrastructure

Three retrieval strategies, one router

AIBrain retrieves memories three ways: semantic search (MiniLM embeddings in SQLite-vec), keyword search (FTS5 — zero ML, often fastest), and hybrid routing (SelRoute picks the best strategy per query, or fuses strategies via reciprocal rank fusion). You don't choose — SelRoute does it automatically based on query characteristics.
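
Reciprocal rank fusion itself is a small, well-known algorithm: each document scores the sum of 1/(k + rank) across the ranked lists it appears in. A standard sketch (k=60 is the conventional constant; whether SelRoute uses it is an assumption):

```python
def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Fuse ranked lists from several retrievers into one ranking."""
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

semantic = ["m12", "m7", "m3"]   # from MiniLM embeddings
keyword  = ["m7", "m9", "m12"]   # from FTS5
print(reciprocal_rank_fusion([semantic, keyword]))  # m7 and m12 rise to the top
```

Documents that appear high in more than one list accumulate score, which is why fusion can beat either strategy alone.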

Design

Agent-agnostic by design

The brain layer — memory, retrieval, consolidation — is completely separate from the model layer. AIBrain works with Claude, OpenAI GPT, DeepSeek, Ollama, or any LLM with a text interface. There is no Claude dependency for the core memory system. Bring your own model; AIBrain provides the memory.
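
That boundary can be as thin as a text-in, text-out interface. A hypothetical sketch of the contract (AIBrain's actual adapter API may differ):

```python
from typing import Protocol

class TextModel(Protocol):
    """All the brain layer needs from the model layer: text in, text out."""
    def complete(self, prompt: str) -> str: ...

def answer(brain, model: TextModel, question: str) -> str:
    # Hypothetical glue: retrieve memory, then let any LLM generate.
    context = "\n".join(m["text"] for m in brain.retrieve(question))
    return model.complete(f"Context:\n{context}\n\nQuestion: {question}")
```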

Benchmark figures from benchmarks/exp37_full_results.json — LongMemEval_M (n=470 queries), LongMemEval Oracle, LoCoMo.