Metadata-Version: 2.4
Name: layered-memory-mcp
Version: 0.2.0
Summary: Layered Memory MCP Server — Extend AI agent memory beyond token limits with a 4-tier knowledge architecture
Project-URL: Homepage, https://github.com/LAIguapi/layered-memory-mcp
Project-URL: Repository, https://github.com/LAIguapi/layered-memory-mcp
Project-URL: Bug-Tracker, https://github.com/LAIguapi/layered-memory-mcp/issues
Project-URL: PyPI, https://pypi.org/project/layered-memory-mcp/
Author: LAIguapi
License-Expression: MIT
License-File: LICENSE
Keywords: ai-agent,knowledge-management,layered-memory,mcp,memory
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.10
Requires-Dist: fastmcp>=2.0.0
Provides-Extra: dev
Requires-Dist: pytest-asyncio>=0.21; extra == 'dev'
Requires-Dist: pytest>=7.0; extra == 'dev'
Description-Content-Type: text/markdown

# Layered Memory MCP Server

> Extend AI agent memory beyond token limits with a 4-tier knowledge architecture.

[**中文**](README.zh-CN.md) | [**日本語**](README.ja.md) | [**한국어**](README.ko.md)

[![PyPI version](https://img.shields.io/pypi/v/layered-memory-mcp.svg)](https://pypi.org/project/layered-memory-mcp/)
[![MCP Compatible](https://img.shields.io/badge/MCP-Compatible-blue)](https://modelcontextprotocol.io)
[![Python 3.10+](https://img.shields.io/badge/Python-3.10+-green)](https://python.org)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](LICENSE)

## The Problem

AI agents have **limited memory** — typically 2-4KB of persistent context injected every turn. Once it's full, the agent forgets everything else. You can't store project configurations, user preferences, API conventions, or domain knowledge without constantly fighting the space limit.

## The Solution

**Layered Memory** organizes knowledge into 4 tiers, trading immediacy for capacity:

```
┌─────────────────────────────────────────────────────┐
│  L0 — Index Layer (2-4KB, injected every turn)      │
│  Pure pointers: "what knowledge exists and where"    │
├─────────────────────────────────────────────────────┤
│  L1 — Knowledge Files (unlimited, loaded on-demand)  │
│  Structured markdown: configs, conventions, facts    │
├─────────────────────────────────────────────────────┤
│  L2 — Skills Layer (loaded when needed)              │
│  Procedures, workflows, tool-specific knowledge      │
├─────────────────────────────────────────────────────┤
│  L3 — Raw Sessions (searched rarely)                 │
│  Full conversation history, searchable by keyword    │
└─────────────────────────────────────────────────────┘
```

**L0 is your table of contents. L1 is your bookshelf. L2 is your cookbook. L3 is your diary.**

## Features

- **Keyword Search** — Find relevant knowledge across all L1 files with relevance scoring
- **Session Scanning** — Extract knowledge candidates from recent agent sessions
- **Space Analytics** — Monitor memory usage and get optimization suggestions
- **Agent Agnostic** — Works with any MCP-compatible agent (Hermes, Claude, Cursor, etc.)
- **Minimal Dependencies** — the core engine uses only the Python stdlib; `fastmcp` is the single runtime dependency, used for MCP transport
- **Privacy First** — All data stays local, no external API calls

## Quick Start

### Install

```bash
pip install layered-memory-mcp
```

### Hermes Agent

Add to `~/.hermes/config.yaml`:

```yaml
mcp_servers:
  layered-memory:
    command: layered-memory-mcp
    timeout: 30
```

### OpenClaw

Install the MCP server, then register it:

```bash
pip install layered-memory-mcp

# Register as an MCP server
openclaw mcp set layered-memory --command layered-memory-mcp
```

Layered Memory complements OpenClaw's built-in vector-based memory:
- **OpenClaw memory**: semantic search over session transcripts (heavy, needs embeddings)
- **Layered Memory**: structured keyword search over curated knowledge files (light, instant)
- Use both: OpenClaw for "what did I say about X?" and Layered Memory for "what's the database connection string?"

### Claude Desktop

Add to your Claude Desktop MCP config:

```json
{
  "mcpServers": {
    "layered-memory": {
      "command": "layered-memory-mcp"
    }
  }
}
```

### Cursor / Other MCP Clients

```bash
# stdio mode (default)
layered-memory-mcp

# HTTP mode
layered-memory-mcp --transport http --port 8080

# Verbose logging
layered-memory-mcp --verbose
```

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `LAYERED_MEMORY_HOME` | Root directory for memory data | `~/.layered-memory/` |
| `LAYERED_MEMORY_SESSIONS_DIR` | Agent sessions directory (auto-detected) | `~/.hermes/sessions/` |
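
Both variables can be exported before launching the server; the paths below are illustrative, not required:

```shell
# Store memory data alongside a project instead of the default location
# (paths here are illustrative)
export LAYERED_MEMORY_HOME="$HOME/projects/agent-memory"
export LAYERED_MEMORY_SESSIONS_DIR="$HOME/.hermes/sessions"
# then start the server as usual:
# layered-memory-mcp
```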

## Usage

### 1. Initialize Knowledge Base

Create markdown files in `~/.layered-memory/knowledge/`:

```bash
mkdir -p ~/.layered-memory/knowledge
```

Create your first knowledge file:

```markdown
<!-- ~/.layered-memory/knowledge/infrastructure.md -->
## Server Configuration
- Production server: prod.example.com (port 22)
- Staging server: stage.example.com
- Deploy via: `./deploy.sh --env production`

## Database
- PostgreSQL 15 on prod-db:5432
- Connection pool: 20 max connections
```

### 2. Build L0 Index

In your agent's persistent memory (the 2-4KB injected every turn), store only pointers:

```
[L0] infrastructure: server config, DB, deploy → knowledge/infrastructure.md
[L0] api-conventions: REST patterns, auth, errors → knowledge/api-conventions.md
[L0] user-prefs: coding style, tool preferences → knowledge/user-prefs.md
```
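
For illustration, pointer lines in this format are easy to parse mechanically. The sketch below is an assumption drawn from the examples above, not part of the package's API:

```python
import re

# Matches the "[L0] topic: summary → path" pointer format shown above.
# The regex and field names are illustrative assumptions.
L0_LINE = re.compile(r"^\[L0\]\s*(?P<topic>[^:]+):\s*(?P<summary>.*?)\s*→\s*(?P<path>\S+)$")

def parse_l0_index(text: str) -> list[dict]:
    """Turn L0 pointer lines into {topic, summary, path} records."""
    entries = []
    for line in text.splitlines():
        m = L0_LINE.match(line.strip())
        if m:
            entries.append(m.groupdict())
    return entries
```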

### 3. Search Knowledge (MCP Tool)

The agent calls `recall_knowledge` when it needs details:

```
Agent: "What's the database connection string?"
→ recall_knowledge(keyword="database")
← Returns relevant sections from infrastructure.md
```

### 4. Session Compression (Cron Job)

Set up a daily cron to extract new knowledge from conversations:

```
1. scan_recent_sessions → get session summaries
2. AI analyzes summaries → identifies stable facts
3. New facts → written to L1 knowledge files
4. L0 index → updated with new pointers
```
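
As a concrete sketch, the nightly job might be wired up with a crontab entry like the one below; `compress_memory.py` is a hypothetical script you would write yourself to drive steps 1–4:

```
# Hypothetical crontab entry: run knowledge compression nightly at 03:00.
# compress_memory.py (your own script) would call scan_recent_sessions,
# have an LLM pick out stable facts, and append them to L1 knowledge files.
0 3 * * * /usr/bin/python3 "$HOME/.layered-memory/compress_memory.py" >> "$HOME/.layered-memory/compress.log" 2>&1
```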

## MCP Tools

| Tool | Description |
|------|-------------|
| `recall_knowledge` | Search L1 knowledge files by keyword |
| `scan_recent_sessions` | Scan recent sessions for knowledge candidates |
| `get_knowledge_file` | Read a specific knowledge file |
| `list_memory_stats` | Get space statistics and optimization suggestions |
| `search_sessions_by_keyword` | Search session history for a keyword |

## MCP Resources

| Resource | Description |
|----------|-------------|
| `memory://status` | Overall system status and configuration |
| `knowledge://files` | List all knowledge files with metadata |

## MCP Prompts

| Prompt | Description |
|--------|-------------|
| `knowledge_compression_prompt` | Template for AI-driven knowledge extraction from sessions |

## Architecture Deep Dive

### Why 4 Tiers?

| Tier | Cost | Capacity | Use Case |
|------|------|----------|----------|
| L0 (Index) | Tokens per turn | ~2KB | Quick lookup table |
| L1 (Knowledge) | 1 file read | Unlimited | Structured facts |
| L2 (Skills) | 1 skill load | Unlimited | Procedures |
| L3 (Sessions) | Full search | Unlimited | Historical recall |

### Relevance Scoring

When you call `recall_knowledge`, files are scored by:

1. **Filename match** (+10 points) — keyword appears in filename
2. **Heading match** (+3 points) — keyword appears in a `## heading`
3. **Content frequency** (+0.5 per occurrence, capped at 5) — how often keyword appears

Results are sorted by score, and only matching `## sections` are returned (not entire files).
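
A minimal re-implementation of this heuristic, using the weights listed above (the package's actual scoring code may differ in detail):

```python
def score_file(filename: str, text: str, keyword: str) -> float:
    """Score a knowledge file for a keyword using the documented weights.

    Illustrative sketch only: whether the heading bonus applies once or per
    matching heading is not specified above; this version applies it per heading.
    """
    kw = keyword.lower()
    score = 0.0
    if kw in filename.lower():
        score += 10  # filename match
    for line in text.splitlines():
        if line.startswith("##") and kw in line.lower():
            score += 3  # heading match
    # content frequency: +0.5 per occurrence, capped at 5 points
    score += min(text.lower().count(kw) * 0.5, 5)
    return score
```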

### Session Compression

The `scan_recent_sessions` tool is designed for cron-job automation:

1. It scans session files from the past N days
2. Extracts user messages, assistant topics, and tool calls
3. Returns a structured JSON for an AI to analyze
4. The AI identifies stable knowledge and writes it to L1 files

This creates a **self-improving memory system** — the agent gets smarter over time as more knowledge is distilled from conversations.

## Agent Compatibility

Layered Memory is an MCP server — it works with any MCP-compatible agent.

| Agent | Config Method | Notes |
|-------|--------------|-------|
| **Hermes Agent** | `config.yaml` → `mcp_servers` | Native MCP client, zero config |
| **OpenClaw** | `openclaw mcp set` | Complements built-in vector memory |
| **Claude Desktop** | `claude_desktop_config.json` | Full MCP support |
| **Cursor** | Settings → MCP | Full MCP support |
| **Codex CLI** | Codex MCP config | Full MCP support |
| **Any MCP client** | stdio or HTTP transport | Standard MCP protocol |

### When to use Layered Memory vs. built-in memory

Most agents have **limited persistent memory** (2-4KB per turn). Layered Memory solves this by:

1. **Separating index from content** — L0 stays small (fits in agent memory), L1 holds unlimited knowledge
2. **On-demand loading** — the agent only reads what it needs, when it needs it
3. **Self-improving** — session compression automatically extracts new knowledge over time

### Integration patterns

```
Agent (2KB memory limit)
  └── L0 index (injected every turn, ~500 bytes)
        ├── [L0] infrastructure: servers, DB → knowledge/infrastructure.md
        ├── [L0] api: REST conventions → knowledge/api-conventions.md
        └── [L0] dev: code style, testing → knowledge/development.md
              │
              ↓ (on demand via recall_knowledge)
        L1 knowledge files (unlimited, loaded by keyword)
```

## Development

```bash
# Clone
git clone https://github.com/LAIguapi/layered-memory-mcp.git
cd layered-memory-mcp

# Install in dev mode
pip install -e ".[dev]"

# Run tests
pytest

# Run locally
python -m layered_memory_mcp.server
```

## License

MIT License — see [LICENSE](LICENSE) for details.
