v1.6.0 — 20 MCP tools, 1649 tests

Memory for AI Agents

Reflex-based memory system that retrieves through activation, not search. Multi-hop reasoning. Causal chains. Temporal awareness. Like a brain, not a database.

View on GitHub

Why Not RAG?

RAG treats memory as a search engine. NeuralMemory treats it as a brain.

| Aspect | RAG / Vector Search | NeuralMemory |
|---|---|---|
| Model | Search engine | Human brain |
| Query | "Find similar text" | "Recall through association" |
| Structure | Flat chunks + embeddings | Neural graph + typed synapses |
| Relationships | None (just cosine similarity) | CAUSED_BY, LEADS_TO, IS_A, ... |
| Temporal | Timestamp filter | Time as first-class neurons |
| Multi-hop | Multiple queries needed | Natural graph traversal |

Example: "Why did Tuesday's outage happen?"

RAG returns:

"JWT caused outage" (missing why JWT was used)

NeuralMemory traces:

outage ← CAUSED_BY ← JWT ← INVOLVES ← Alice
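The chain above can be sketched as a walk over typed synapses. A minimal illustration, assuming memories form a graph of `(source, relation, target)` triples; all names here (`edges`, `trace_why`) are illustrative, not NeuralMemory's actual API:

```python
from collections import defaultdict

# Illustrative memory graph for the outage example above.
edges = [
    ("outage", "CAUSED_BY", "JWT"),  # the JWT change caused the outage
    ("JWT", "INVOLVES", "Alice"),    # Alice was involved in the JWT work
]

graph = defaultdict(list)
for src, rel, dst in edges:
    graph[src].append((rel, dst))

def trace_why(start, max_hops=5):
    """Follow typed edges from a neuron, collecting the causal chain."""
    chain, node = [start], start
    for _ in range(max_hops):
        if not graph[node]:
            break
        _, node = graph[node][0]  # sketch: follow the first outgoing edge
        chain.append(node)
    return chain

print(" <- ".join(trace_why("outage")))  # outage <- JWT <- Alice
```

One traversal answers the "why" question end to end, where flat retrieval would need a separate query per hop.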

Features

Reflex Activation

Trail-based retrieval through fiber pathways with conductivity. Memories that fire together, wire together.

Co-Activation

Hebbian binding detects neurons activated by multiple sources. Emergent pattern recognition.
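The Hebbian rule ("fire together, wire together") can be sketched as a saturating weight update. The learning rate and update formula below are illustrative assumptions, not NeuralMemory internals:

```python
# Synapse weights between neuron pairs (illustrative values).
weights = {("auth_bug", "login.py"): 0.5}

def co_activate(a, b, lr=0.1):
    """Strengthen the synapse between two co-activated neurons."""
    w = weights.get((a, b), 0.0)
    weights[(a, b)] = w + lr * (1.0 - w)  # saturating: approaches but never exceeds 1.0

co_activate("auth_bug", "login.py")  # 0.5  -> 0.55
co_activate("auth_bug", "login.py")  # 0.55 -> 0.595
```

Repeated co-activation yields diminishing returns, so frequently paired memories stabilize instead of growing without bound.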

Time-First Anchoring

Time neurons as primary anchors. "What happened last Tuesday?" just works.

Typed Relationships

20 synapse types: CAUSED_BY, LEADS_TO, IS_A, CONTAINS, CO_OCCURS. Not just "related".
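A typed synapse might be modeled as follows; this sketch shows a handful of the types named above, and the dataclass layout is an assumption for illustration, not NeuralMemory's internal representation:

```python
from dataclasses import dataclass
from enum import Enum

class SynapseType(Enum):  # a few of the 20 types; not the full set
    CAUSED_BY = "CAUSED_BY"
    LEADS_TO = "LEADS_TO"
    IS_A = "IS_A"
    CONTAINS = "CONTAINS"
    CO_OCCURS = "CO_OCCURS"

@dataclass
class Synapse:
    source: str
    type: SynapseType
    target: str
    weight: float = 1.0

s = Synapse("outage", SynapseType.CAUSED_BY, "JWT")
```

Because each edge carries a type, traversal can be selective: follow only CAUSED_BY edges for a "why" question, only CONTAINS for structure.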

Brain Import/Export

Export, import, merge brains. Share learned patterns between agents like Git repos.

DB-to-Brain Training

Teach brains to understand database schemas. Tables become neurons, FKs become synapses.
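The mapping can be sketched in a few lines: tables become neurons, foreign keys become synapses. The schema dict and the REFERENCES relation below are made-up illustrations, not real nmem input or output:

```python
# Hypothetical schema description: table -> foreign-key columns.
schema = {
    "users":  {"fks": {}},
    "orders": {"fks": {"user_id": "users"}},  # orders.user_id -> users
}

neurons, synapses = set(), []
for table, meta in schema.items():
    neurons.add(table)                        # each table becomes a neuron
    for _col, ref_table in meta["fks"].items():
        synapses.append((table, "REFERENCES", ref_table))  # each FK becomes a synapse
```

Once the schema lives in the graph, questions like "what links orders to users?" become ordinary graph traversals.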

Quick Start

Terminal

# Store memories
$ nmem remember "Fixed auth bug in login.py:42"
$ nmem remember "Use PostgreSQL" --type decision

# Recall through association
$ nmem recall "auth bug fix"
# → "Fixed auth bug in login.py:42"

# Inject context into AI
$ nmem context --limit 10 --json

MCP Server — Claude Code / Cursor

// ~/.claude/mcp_servers.json
{
  "neural-memory": {
    "command": "nmem-mcp"
  }
}

// Then just ask naturally:
// "What did we decide about caching?"
// "Recall the auth bug fix"
// "What happened last Tuesday?"

Benchmark

NeuralMemory vs cosine-similarity vector search on query types that matter for AI agents.

| Query Type | Vector Search | NeuralMemory |
|---|---|---|
| Direct fact recall | ~90% | ~92% |
| Multi-hop reasoning (A→B→C) | ~30% | ~85% |
| Causal chain ("why did X happen?") | ~15% | ~80% |
| Temporal query ("what happened Tuesday?") | ~10% | ~88% |
| Associative recall ("related to auth") | ~50% | ~90% |

Synthetic benchmark: 15 memories, 8 queries. Accuracy = correct information retrieved in top-3 results.

How It Works

1. Encode: Text is parsed into neurons (entities, events, concepts) connected by typed synapses. Time anchors are created automatically.

2. Consolidate: Like sleep for the brain. Cross-links related memories, strengthens important paths, prunes weak connections.

3. Recall: The query activates matching neurons. Activation spreads through synapses. The strongest path is the answer. Multi-hop is free.
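The three steps above can be condensed into a minimal spreading-activation loop. All names, weights, and the decay rule here are illustrative assumptions, not NeuralMemory's implementation:

```python
from collections import defaultdict

# Toy memory graph: neuron -> [(neighbor, synapse weight)].
synapses = defaultdict(list)
for a, b, w in [("auth", "bug_fix", 0.9), ("bug_fix", "login.py:42", 0.8)]:
    synapses[a].append((b, w))

def recall(seeds, decay=0.5, threshold=0.1):
    """Activate seed neurons, then spread decaying activation along synapses."""
    activation = {s: 1.0 for s in seeds}
    frontier = list(seeds)
    while frontier:
        node = frontier.pop()
        for nbr, w in synapses[node]:
            a = activation[node] * w * decay
            if a > activation.get(nbr, 0.0) and a > threshold:
                activation[nbr] = a  # stronger path found; keep spreading
                frontier.append(nbr)
    return activation  # strongest activations = the recalled memories
```

Activating "auth" also surfaces "bug_fix" and "login.py:42" without any extra queries; that is the sense in which multi-hop is free.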

Works Everywhere

20 MCP tools. CLI. Python API. VS Code extension.

Claude Code
Cursor
VS Code
Python
CLI
MCP Server
FastAPI

Give Your Agents a Brain

Install in 10 seconds. First memory in 30 seconds. Multi-hop reasoning in 60 seconds.