Reflex-based memory system that retrieves through activation, not search. Multi-hop reasoning. Causal chains. Temporal awareness. Like a brain, not a database.
RAG treats memory as a search engine. NeuralMemory treats it as a brain.
| Aspect | RAG / Vector Search | NeuralMemory |
|---|---|---|
| Model | Search Engine | Human Brain |
| Query | "Find similar text" | "Recall through association" |
| Structure | Flat chunks + embeddings | Neural graph + typed synapses |
| Relationships | None (just cosine similarity) | CAUSED_BY, LEADS_TO, IS_A, ... |
| Temporal | Timestamp filter | Time as first-class neurons |
| Multi-hop | Multiple queries needed | Natural graph traversal |
Example: "Why did Tuesday's outage happen?"
RAG returns:
"JWT caused outage" (missing why JWT was used)
NeuralMemory traces:
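A minimal sketch of the idea, not NeuralMemory's internals: the graph and node names below are made-up example data, and the point is that "why?" falls out of walking typed CAUSED_BY synapses.

```python
# Illustrative only: a tiny typed graph and a "why?" query that walks
# CAUSED_BY synapses until the chain runs out. Node names are made up.
graph = {
    "tuesday outage": [("CAUSED_BY", "expired JWT")],
    "expired JWT": [("CAUSED_BY", "decision to adopt JWT auth")],
    "decision to adopt JWT auth": [],
}

def why(node):
    """Rebuild the full causal chain behind a node."""
    chain = [node]
    while True:
        causes = [target for kind, target in graph[node] if kind == "CAUSED_BY"]
        if not causes:
            return chain
        node = causes[0]
        chain.append(node)

print(" <- CAUSED_BY <- ".join(why("tuesday outage")))
# tuesday outage <- CAUSED_BY <- expired JWT <- CAUSED_BY <- decision to adopt JWT auth
```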
Trail-based retrieval through fiber pathways with conductivity. Memories that fire together, wire together.
Hebbian binding detects neurons activated by multiple sources. Emergent pattern recognition.
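A toy illustration of the rule, assuming nothing about the real data structures: each time two neurons fire in the same recall, the synapse between them gets a bit stronger.

```python
# Toy Hebbian update (illustrative): neurons activated together get a
# stronger pairwise synapse; repeated co-activation keeps adding strength.
from collections import defaultdict
from itertools import combinations

synapse_strength = defaultdict(float)   # (neuron_a, neuron_b) -> strength

def co_activate(neurons, rate=0.1):
    for a, b in combinations(sorted(neurons), 2):
        synapse_strength[(a, b)] += rate

co_activate({"auth bug", "login.py", "JWT"})
co_activate({"auth bug", "login.py"})
print(synapse_strength[("auth bug", "login.py")])   # 0.2 after firing together twice
```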
Time neurons as primary anchors. "What happened last Tuesday?" just works.
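A sketch of the idea with made-up helpers, not the library's API: days are nodes, events attach to them, and "last Tuesday" resolves to a node lookup rather than a timestamp range.

```python
# Sketch: days as first-class neurons (made-up helpers, not the real API).
import datetime as dt

day_neurons = {}   # date -> events anchored to that day

def remember(event, when):
    day_neurons.setdefault(when, []).append(event)

def last_weekday(weekday, today):
    """Most recent past date with the given weekday (0 = Monday ... 6 = Sunday)."""
    return today - dt.timedelta(days=(today.weekday() - weekday - 1) % 7 + 1)

today = dt.date(2024, 6, 14)                         # a Friday
remember("deploy caused outage", dt.date(2024, 6, 11))
print(day_neurons[last_weekday(1, today)])           # events from last Tuesday
```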
20 synapse types: CAUSED_BY, LEADS_TO, IS_A, CONTAINS, CO_OCCURS. Not just "related".
Export, import, merge brains. Share learned patterns between agents like Git repos.
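Conceptually, a merge is a union of neurons plus a combination of synapse strengths. The dict layout below is an assumption for illustration, not the real export format.

```python
# Illustrative merge of two exported "brains" (layout is made up): union the
# neurons, and where both brains know a synapse, combine its strength.
def merge_brains(a, b):
    neurons = a["neurons"] | b["neurons"]
    synapses = dict(a["synapses"])
    for edge, strength in b["synapses"].items():
        synapses[edge] = synapses.get(edge, 0.0) + strength
    return {"neurons": neurons, "synapses": synapses}

brain_a = {"neurons": {"auth bug", "JWT"},
           "synapses": {("auth bug", "CAUSED_BY", "JWT"): 0.5}}
brain_b = {"neurons": {"JWT", "tuesday outage"},
           "synapses": {("auth bug", "CAUSED_BY", "JWT"): 0.25}}
print(merge_brains(brain_a, brain_b)["synapses"])    # strength 0.75 for the shared edge
```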
Teach brains to understand database schemas. Tables become neurons, foreign keys become synapses.
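A sketch of that mapping over a made-up schema; the synapse label here is illustrative, not necessarily one of the library's 20 types.

```python
# Made-up schema, illustrative synapse label: tables turn into neurons and
# each foreign key turns into a typed edge between two table neurons.
schema = {
    "users":    {"fks": {}},
    "orders":   {"fks": {"user_id": "users"}},
    "invoices": {"fks": {"order_id": "orders"}},
}

neurons = set(schema)
synapses = [(table, "REFERENCES", meta["fks"][column])
            for table, meta in schema.items()
            for column in meta["fks"]]
print(synapses)   # [('orders', 'REFERENCES', 'users'), ('invoices', 'REFERENCES', 'orders')]
```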
```bash
# Store memories
$ nmem remember "Fixed auth bug in login.py:42"
$ nmem remember "Use PostgreSQL" --type decision

# Recall through association
$ nmem recall "auth bug fix"
# → "Fixed auth bug in login.py:42"

# Inject context into AI
$ nmem context --limit 10 --json
```
```jsonc
// ~/.claude/mcp_servers.json
{
  "neural-memory": {
    "command": "nmem-mcp"
  }
}
```

Then just ask naturally:

- "What did we decide about caching?"
- "Recall the auth bug fix"
- "What happened last Tuesday?"
NeuralMemory vs cosine-similarity vector search on query types that matter for AI agents.
| Query Type | Vector Search | NeuralMemory |
|---|---|---|
| Direct fact recall | ~90% | ~92% |
| Multi-hop reasoning (A→B→C) | ~30% | ~85% |
| Causal chain ("why did X happen?") | ~15% | ~80% |
| Temporal query ("what happened Tuesday?") | ~10% | ~88% |
| Associative recall ("related to auth") | ~50% | ~90% |
Synthetic benchmark: 15 memories, 8 queries. Accuracy = fraction of queries for which the correct information appears in the top-3 results.
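For concreteness, that metric can be computed as below; the queries, results, and expected answers are placeholders, not the actual benchmark data.

```python
# Placeholder data, real metric: a query scores a hit if the expected fact
# shows up anywhere in its top-3 retrieved results.
def top3_accuracy(results_per_query, expected):
    hits = sum(expected[q] in results[:3] for q, results in results_per_query.items())
    return hits / len(results_per_query)

results = {"why did the outage happen?": ["expired JWT", "deploy log", "on-call note"],
           "auth bug fix":                ["style nit", "readme edit", "unrelated chat"]}
expected = {"why did the outage happen?": "expired JWT",
            "auth bug fix":               "Fixed auth bug in login.py:42"}
print(top3_accuracy(results, expected))   # 0.5 -- one hit out of two queries
```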
Text is parsed into neurons (entities, events, concepts) connected by typed synapses. Time anchors are created automatically.
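A toy version of that step; the real parser is more sophisticated, and the synapse labels here are illustrative.

```python
# Toy ingestion (illustrative labels): one memory becomes a small cluster of
# neurons (the event, the entities it mentions, a time anchor) plus typed edges.
import datetime as dt
import re

def ingest(text, when):
    entities = re.findall(r"\w+\.\w+", text)                  # crude entity pick: file names
    synapses = [(text, "CONTAINS", entity) for entity in entities]
    synapses.append((text, "OCCURRED_ON", when.isoformat()))  # automatic time anchor
    return {text, when.isoformat(), *entities}, synapses

neurons, synapses = ingest("Fixed auth bug in login.py", dt.date(2024, 6, 11))
print(synapses)
# [('Fixed auth bug in login.py', 'CONTAINS', 'login.py'),
#  ('Fixed auth bug in login.py', 'OCCURRED_ON', '2024-06-11')]
```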
Consolidation is like sleep for the brain: it cross-links related memories, strengthens important paths, and prunes weak connections.
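A sketch of the pruning half of that pass (strengthening is the Hebbian update shown earlier); the decay rate and floor are made-up numbers.

```python
# Illustrative prune pass: every synapse decays a little, and anything that
# falls below a floor is dropped. Strengthening happens at recall time.
def consolidate(synapses, decay=0.9, floor=0.05):
    return {edge: strength * decay
            for edge, strength in synapses.items()
            if strength * decay >= floor}

synapses = {("outage", "CAUSED_BY", "expired JWT"):          0.8,
            ("outage", "CO_OCCURS", "coffee machine broke"): 0.05}
print(consolidate(synapses))   # the weak coincidence is pruned, the cause survives
```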
Query activates matching neurons. Activation spreads through synapses. The strongest path = the answer. Multi-hop is free.
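A minimal spreading-activation sketch under those assumptions (weighted edges, a fixed hop count, a damping factor); it is not the real engine, but it shows why multi-hop comes for free.

```python
# Minimal spreading activation (not the real engine): seed the neurons matched
# by the query, push activation along weighted synapses for a few hops, and
# read the most activated neurons back as the answer.
graph = {
    "auth bug":       [("login.py:42", 0.9), ("JWT", 0.6)],
    "JWT":            [("tuesday outage", 0.7)],
    "login.py:42":    [],
    "tuesday outage": [],
}

def recall(seeds, hops=2, damping=0.5):
    activation = {neuron: 1.0 for neuron in seeds}
    frontier = dict(activation)
    for _ in range(hops):
        spread = {}
        for neuron, act in frontier.items():
            for neighbor, weight in graph[neuron]:
                spread[neighbor] = spread.get(neighbor, 0.0) + act * weight * damping
        for neuron, act in spread.items():
            activation[neuron] = activation.get(neuron, 0.0) + act
        frontier = spread
    return sorted(activation.items(), key=lambda item: -item[1])

print(recall({"auth bug"}))
# "tuesday outage" lights up two hops away, even though the query never named it
```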
20 MCP tools. CLI. Python API. VS Code extension.
Install in 10 seconds. First memory in 30 seconds. Multi-hop reasoning in 60 seconds.