Metadata-Version: 2.4
Name: shelfai
Version: 0.2.0a1
Summary: Lightweight, file-based context management for AI agents. Your agent writes better abstracts than your RAG pipeline.
Author: ShelfAI Contributors
License-Expression: Apache-2.0
Project-URL: Homepage, https://github.com/noahbfreedman-cloud/ShelfAI
Project-URL: Repository, https://github.com/noahbfreedman-cloud/ShelfAI
Project-URL: Issues, https://github.com/noahbfreedman-cloud/ShelfAI/issues
Project-URL: Documentation, https://github.com/noahbfreedman-cloud/ShelfAI/docs
Keywords: ai,agents,context,memory,rag,llm,knowledge-management,session-management
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: typer>=0.9.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: rich>=13.0
Requires-Dist: python-frontmatter>=1.0
Requires-Dist: filelock>=3.12
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.40.0; extra == "anthropic"
Provides-Extra: openai
Requires-Dist: openai>=1.0.0; extra == "openai"
Provides-Extra: all
Requires-Dist: anthropic>=0.40.0; extra == "all"
Requires-Dist: openai>=1.0.0; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0; extra == "dev"
Requires-Dist: ruff>=0.4.0; extra == "dev"
Dynamic: license-file

# 📚 ShelfAI

> **🚧 Experimental** — ShelfAI is in active development. The core is battle-tested on a 25-agent production swarm, but the public API may change. We're releasing early for community feedback. [Open an issue](https://github.com/noahbfreedman-cloud/ShelfAI/issues) or join the conversation.

**Most frameworks optimize retrieval and persistence. ShelfAI optimizes the source files they retrieve from.**

Every agent framework treats skill files as flat text — you either load the whole thing or nothing. Retrieval systems find the right file, but nobody optimizes what's *inside* it. Skills accumulate with no pruning, no chunking, no internal structure for selective loading.

ShelfAI is the document-ops layer for agent context. It applies RAG architecture principles — abstracts, semantic chunking, titled sections — to agent skill files, so an LLM can skim an abstract, decide relevance, and load only the chunks it needs. An agent maintains the files over time, handling restructuring and pruning automatically.

The pattern comes from a medical RAG system we're building in partnership with the University of Coimbra, which relies on tiered retrieval, agent-written abstracts, and structured chunking. We asked: why doesn't agent memory work this way? So we built ShelfAI.

```bash
pip install shelfai
```

[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](LICENSE)
[![Python](https://img.shields.io/badge/python-3.9+-green.svg)](https://python.org)
[![CI](https://github.com/noahbfreedman-cloud/ShelfAI/actions/workflows/ci.yml/badge.svg)](https://github.com/noahbfreedman-cloud/ShelfAI/actions/workflows/ci.yml)
[![PyPI](https://img.shields.io/pypi/v/shelfai.svg)](https://pypi.org/project/shelfai)
[![Status](https://img.shields.io/badge/status-experimental-orange.svg)]()

---

## Where ShelfAI Sits

Your agent's memory architecture is sophisticated. The documents it remembers are not. Existing memory systems — [Hermes](https://github.com/NousResearch/hermes-agent), [Honcho](https://honcho.dev), [SuperMemory](https://supermemory.ai), QMD — solve *what* to remember and *when* to retrieve it. ShelfAI solves *how the documents are structured* once you've decided to read one.

```
Raw skill file / knowledge document
        ↓
   [ ShelfAI ]  ← structures, chunks, titles, writes abstracts
        ↓
   Structured document with abstract + semantic chunks
        ↓
   QMD / SuperMemory / Hermes skills / OpenClaw ClawHub
```

ShelfAI is a preprocessing layer that makes retrieval systems work better, not a replacement for any of them.

| Layer | Tool | What It Does |
|-------|------|-------------|
| Search | QMD | Finds the right files. BM25 + vector + LLM reranking, fully local. |
| Structure | **ShelfAI** | Curates what gets searched. Abstracts, chunks, learning loop. |
| Entity Memory | Honcho or Supermemory | Remembers users, projects, facts that change over time. |

---

## The Core Problem

Every agent framework today treats skill files as flat text:

**Skill discovery** — You either match the YAML description or you don't. There's no abstract to help make smarter routing decisions. In Hermes, for example, Level 0 gives you a name + description index (~3k tokens) and Level 1 gives you the full skill. There is no Level 0.5.

**Skill loading** — You read zero lines or all 500. No way to say "give me chunk 3 about error handling." Every irrelevant line burns tokens.

**Skill creation** — The agent writes a flat markdown file. No internal structure optimized for future retrieval. Skills accumulate indefinitely with no pruning, no contradiction detection, no internal navigation.

ShelfAI adds the missing gradient: abstracts for smarter routing, semantic chunks for selective loading, and a learning loop that improves both over time.

---

## How It Works

### The Shelf

Your agent's knowledge lives in a simple directory:

```
shelf/
├── index.md           ← One-line abstracts for everything
├── skills/            ← Agent capabilities and procedures
├── knowledge/         ← Domain-specific reference material
├── memory/            ← Learnings from past sessions
│   ├── user/          ← User preferences
│   └── agent/         ← Operational lessons and patterns
└── resources/         ← Reference materials
```

### The Index

`shelf/index.md` is the only file your agent reads first:

```markdown
# ShelfAI Index

## Skills
- **skills/seo_audit.md** — Use when a client requests a site audit. Covers
  technical crawl, Core Web Vitals, internal linking, and content gaps.
- **skills/lead_nurture.md** — Use when following up with a lead. Includes
  timing rules, email templates, and the 72-hour re-engagement trigger.

## Knowledge
- **knowledge/api_docs.md** — Payments API reference. Key gotcha: staging
  returns 200 with error body, don't trust status codes.

## Memory
- **memory/agent/lessons.md** — Staging needs VPN. Screaming Frog misses
  JS-rendered pages on Client B's site.
```

The agent reads the index, matches abstracts to the task, and loads **only** the matching files. This is the missing Level 0.5 — richer than a name/description pair, cheaper than loading the full file.
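The lookup can be sketched with plain stdlib code. This is an illustration, not ShelfAI's API: the `parse_index` and `match_entries` helpers are invented here, and keyword overlap stands in for the LLM's relevance judgment.

```python
import re

INDEX = """\
## Skills
- **skills/seo_audit.md** — Use when a client requests a site audit.
- **skills/lead_nurture.md** — Use when following up with a lead.
"""

def parse_index(text):
    # Each entry looks like "- **path** — abstract"
    pattern = re.compile(r"-\s+\*\*(?P<path>[^*]+)\*\*\s+—\s+(?P<abstract>.+)")
    return [(m["path"], m["abstract"]) for m in pattern.finditer(text)]

def tokens(s):
    # Keep words longer than three characters to skip common stopwords
    return {w for w in re.findall(r"[a-z]+", s.lower()) if len(w) > 3}

def match_entries(task, entries):
    # Keyword overlap stands in for the LLM's relevance call
    return [path for path, abstract in entries if tokens(task) & tokens(abstract)]

entries = parse_index(INDEX)
print(match_entries("run a site audit for the client", entries))
# → ['skills/seo_audit.md']
```

In practice the agent itself makes the relevance call; the point is that the index is small enough to read whole, and each hit resolves to one file path.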

### The Learning Loop

After each conversation, ShelfAI's session agent:

1. Analyzes the transcript
2. Extracts operational lessons, workflow patterns, preference updates
3. Deduplicates against existing knowledge
4. Updates memory files and refines index abstracts
5. QMD re-indexes — better abstracts mean better search next time

```
Session happens
    │
    ├─→ ShelfAI session agent extracts operational lessons
    │     → Updates memory files
    │     → Refines abstracts
    └─→ QMD re-indexes the updated shelf
          → Better abstracts = better reranking

Next session
    ├─→ QMD finds more relevant files
    └─→ ShelfAI provides richer, curated context

Agent performs better → richer sessions → better extractions → loop
```

**The key insight:** Your agent uses your context daily. It knows which details matter, which skills get called when, which gotchas keep tripping things up. An agent that *uses* the context writes better retrieval abstracts than a model that just *summarizes* it.

---

## Agent File Chunking

Monolithic agent instruction files (150-400+ lines) waste tokens loading instructions irrelevant to the current task. ShelfAI's chunking system splits them into modular, selectively loaded chunks — reducing per-run token cost by ~60%.

### Two-Layer Approach

1. **Heuristic pre-filter (free):** `shelfai chunk` extracts soul/rules/read-order into always-loaded chunks. Handles the ~35% of chunking that's structurally obvious. Zero LLM cost, safe to run anytime.

2. **LLM semantic pass (~$0.01/agent):** The session agent groups remaining sections by deliverable/workflow. Triggered on a weekly cadence. The LLM has full latitude to say "no change needed" when a chunk's size serves the deliverable.
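The heuristic layer amounts to heading splits plus keyword matching. The sketch below shows the shape of the idea, not `shelfai chunk`'s actual implementation; the keyword lists are invented for illustration.

```python
import re

# Illustrative keyword lists: section titles matching these become always-loaded chunks
ALWAYS_LOADED = {
    "soul": ("mission", "role", "identity"),
    "rules": ("rule", "constraint", "never"),
    "read-order": ("read order", "data source", "integration"),
}

def split_sections(text):
    # Split a monolithic agent file on level-2 headings
    return [s.strip() for s in re.split(r"(?m)^(?=## )", text) if s.strip()]

def classify(section):
    title = section.splitlines()[0].lower()
    for chunk, keywords in ALWAYS_LOADED.items():
        if any(k in title for k in keywords):
            return chunk        # structurally obvious: extract now, zero LLM cost
    return None                 # ambiguous: leave for the weekly LLM semantic pass

AGENT_MD = """\
## Mission and Identity
You are the efficiency agent.

## Hard Rules
Never publish without review.

## Weekly Report Workflow
Every Monday, compile the scorecard.
"""

for section in split_sections(AGENT_MD):
    print(section.splitlines()[0], "->", classify(section) or "semantic pass")
```

Anything the pre-filter can't classify by title alone falls through to the LLM pass, which is why the heuristic layer is safe to run anytime.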

### Chunk Structure

```
agents/{id}/
├── AGENT.md              # Thin router (~40 lines) — maps tasks to chunks
├── MEMORY.md             # Learned patterns
└── chunks/
    ├── soul.md           # Always loaded — mission, role, identity
    ├── rules.md          # Always loaded — hard constraints
    ├── read-order.md     # Always loaded — system integration, data sources
    ├── {task-1}.md       # Loaded when task matches
    └── {task-2}.md       # Loaded when task matches
```

### Chunking CLI

```bash
# Scan for agents that need chunking
shelfai chunk-scan ./agents

# Preview the pre-filter on a specific agent
shelfai chunk ./agents/18-efficiency/AGENT.md --dry-run

# Write chunk files (backs up original as AGENT.md.pre-chunk)
shelfai chunk ./agents/18-efficiency/AGENT.md --write
```

| Class | When to Load | Examples |
|-------|-------------|---------|
| `always` | Every run | soul, rules, read-order, MEMORY.md |
| `task` | Current task matches | tiktok, blog-article, daily-scorecard |
| `schedule` | Time-triggered | weekly-report (Mondays), monthly-review |
| `reference` | On demand or searched | scoring-formulas, tool-setup |
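Selecting chunks by load class at run time is straightforward. A minimal sketch under assumed names (the manifest format and `chunks_to_load` helper are hypothetical, not ShelfAI's API):

```python
from datetime import date

# Hypothetical manifest: chunk name -> (load class, trigger)
MANIFEST = {
    "soul":          ("always", None),
    "rules":         ("always", None),
    "read-order":    ("always", None),
    "tiktok":        ("task", "tiktok"),
    "blog-article":  ("task", "blog"),
    "weekly-report": ("schedule", 0),  # 0 = Monday
}

def chunks_to_load(task, today):
    selected = []
    for name, (cls, trigger) in MANIFEST.items():
        if cls == "always":
            selected.append(name)                        # soul, rules, read-order
        elif cls == "task" and trigger in task.lower():
            selected.append(name)                        # current task matches
        elif cls == "schedule" and today.weekday() == trigger:
            selected.append(name)                        # time-triggered
    return selected

print(chunks_to_load("write a tiktok script", date(2025, 1, 7)))  # a Tuesday
# → ['soul', 'rules', 'read-order', 'tiktok']
```

The `reference` class has no trigger: those chunks are loaded only when the agent asks for them by name or a search surfaces them.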

---

## Quick Start

```bash
# Install
pip install shelfai

# Initialize a shelf
shelfai init --template agent

# Add your agent's knowledge
shelfai add ./my_playbook.md --category skills
shelfai add ./api_docs.md --category knowledge

# Build the index (manual = best quality, auto = faster)
shelfai index --manual    # You write abstracts with retrieval hints
# OR
shelfai index             # Auto-generate abstracts with LLM (~$0.01)

# Register with QMD
qmd collection add ./shelf --name shelf
qmd embed

# After each conversation, extract learnings
shelfai session ./transcript.md
qmd embed                 # Re-index so QMD sees the updates
```

That's it. Your agent now has a knowledge base that improves after every conversation.

### Use with Your Agent

```python
from shelfai import Shelf

shelf = Shelf("./shelf")

def run_task(task: str):
    # Find relevant context
    relevant = shelf.index.search(task)
    context = "\n".join(shelf.read_file(e.file_path) for e in relevant)
    lessons = shelf.read_file("memory/agent/lessons.md", default="")

    return run_agent(f"{context}\n\n{lessons}", task)
```

### Post-Session Learning

```python
from shelfai import Shelf
from shelfai.agents.session import SessionManager
from shelfai.providers.anthropic import AnthropicProvider

shelf = Shelf("./shelf")
provider = AnthropicProvider()
manager = SessionManager(shelf, provider)

# After conversation ends
report = manager.process_file("transcript.md")
print(f"Extracted {report.extraction.total_items} learnings")

# Re-index so QMD sees the updates
import subprocess
subprocess.run(["qmd", "embed"])
```

---

## Memory Compaction

Memory files grow as the session agent appends lessons after each conversation. Without compaction, they accumulate duplicates, superseded entries, and stale observations that dilute context quality.

`shelfai compact` consolidates memory files using heuristic dedup — no LLM needed, safe to run anytime.

```bash
# Scan all memory files (shelf + agent MEMORY.md)
shelfai compact --shelf ./shelf --agents ./agents --scan

# Preview compaction on a specific file
shelfai compact --file ./shelf/memory/agent/what-works.md

# Apply compaction (backs up originals as .pre-compact)
shelfai compact --shelf ./shelf --agents ./agents --write
```

Removes near-duplicate entries, strips placeholders, archives entries older than 90 days (configurable via `--stale-days`). Preserves file structure, headings, and tables. Backs up originals before writing.
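The dedup idea fits in a few lines of stdlib Python. The normalization rules, placeholder list, and 0.9 similarity threshold below are illustrative; `shelfai compact`'s actual heuristics may differ.

```python
import re
from difflib import SequenceMatcher

PLACEHOLDERS = {"todo", "tbd", "placeholder"}

def normalize(entry):
    # Lowercase and collapse punctuation/whitespace before comparing
    return re.sub(r"\W+", " ", entry.lower()).strip()

def compact(entries, threshold=0.9):
    kept = []
    for entry in entries:
        norm = normalize(entry)
        if not norm or norm in PLACEHOLDERS:
            continue                          # strip empty lines and placeholders
        if any(SequenceMatcher(None, norm, normalize(k)).ratio() >= threshold
               for k in kept):
            continue                          # near-duplicate of an earlier entry
        kept.append(entry)                    # first occurrence wins
    return kept

lessons = [
    "- Staging needs VPN access.",
    "- staging needs VPN access",        # duplicate after normalization
    "- TODO",                            # placeholder
    "- Screaming Frog misses JS-rendered pages.",
]
print(compact(lessons))
# → ['- Staging needs VPN access.', '- Screaming Frog misses JS-rendered pages.']
```

Because everything is string comparison, no LLM call is involved, which is what makes compaction safe to run on a schedule.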

---

## Integrations

ShelfAI is framework-agnostic: it manages plain markdown files, so it works with any agent runtime.

| Framework | Integration | Status |
|-----------|------------|--------|
| **Claude Code** | Claude skill (`skills/claude/`) | ✅ Shipped |
| [**Hermes Agent**](https://github.com/NousResearch/hermes-agent) | Post-run hook + skill structuring (`examples/hermes_integration.py`) | 📖 Example |
| [**OpenClaw**](https://github.com/all-hands-ai/OpenHands) | ClawHub skill packaging (`examples/openclaw_integration.py`) | 📖 Example |
| **QMD** | Direct — ShelfAI curates what QMD indexes | ✅ Shipped |
| **Honcho** | Complementary — ShelfAI handles ops knowledge, Honcho handles entity memory | Compatible |
| **SuperMemory** | Complementary — ShelfAI structures docs before ingestion | Compatible |

See `examples/` for integration guides.

---

## Built in Production

ShelfAI was built because we needed it. We run a 25-agent content swarm (17 pipeline + 8 oversight) and hit every failure mode:

- **Memory bloat:** 37 memory files, 207 entries with 25 near-duplicates poisoning retrieval. `shelfai compact` cleaned them in one pass.
- **Monolithic configs:** Agent files exceeding 400 lines, burning tokens on irrelevant instructions every run. `shelfai chunk` split them into task-specific modules. ~60% token savings.

---

## CLI Reference

| Command | Description |
|---------|-------------|
| `shelfai init` | Initialize a new shelf |
| `shelfai add <file>` | Add a file or URL to the shelf |
| `shelfai index` | Build/rebuild the index (generate abstracts) |
| `shelfai session <file>` | Run session agent on a transcript |
| `shelfai search <query>` | Test abstract matching |
| `shelfai status` | Show shelf health and stats |
| `shelfai prune` | Clean up stale memory entries |
| `shelfai export` | Export shelf as a single file |
| `shelfai chunk-scan <dir>` | Scan agents directory for chunking candidates |
| `shelfai chunk <file>` | Run heuristic pre-filter on a monolithic agent file |
| `shelfai compact` | Consolidate memory files (dedup, archive stale) |
| `shelfai review` | List or approve staged new-context proposals |

---

## Cost

| Component | Cost |
|-----------|------|
| ShelfAI | **$0** + ~$0.02/session for LLM calls |
| QMD | **$0** — fully local |
| Honcho | **$0** — open source (or hosted) |
| Supermemory | Free tier or $19/mo |

---

## Philosophy

1. **Files beat databases** for human-scale knowledge. If you can `ls` it, you understand it.
2. **Agents write better indexes.** The thing that uses the context should write the retrieval abstracts.
3. **Transparency beats magic.** When retrieval fails, open the file and read why.
4. **Zero infrastructure is the default.** Scale up when you need to, not because your tools demand it.

---

## Project Status

**v0.2.0-alpha (Experimental)**

Core Features:
- [x] Core CLI (init, add, index, session, search, status, export, prune, review)
- [x] Session management agent (5-stage pipeline, schema validation, backups)
- [x] Auto-indexing with LLM providers (Anthropic, OpenAI)
- [x] Production hardening (path traversal protection, file locking, retry logic)
- [x] Agent file chunking (chunk-scan + chunk commands, two-layer architecture)
- [x] Memory compaction (heuristic dedup, stale archival, placeholder cleanup)
- [x] Integrations: Claude skill, Hermes Agent (example), OpenClaw (example)
- [x] 100 tests passing ✓ Apache 2.0 licensed

Roadmap:
- [ ] `shelfai register --qmd` (one-command QMD setup)
- [ ] MCP server implementation
- [ ] Watch mode (auto-index on file changes)
- [ ] Shelf templates (customer support, content production, analysis, sales)

---

## Contributing

We welcome contributions. ShelfAI is early-stage and there's a lot of surface area.

Areas where help is especially welcome: shelf templates for specific domains, real-world case studies, integration examples for your framework, benchmarks (token cost reductions, retrieval quality), and the `shelfai register --qmd` CLI command.

See [CONTRIBUTING.md](CONTRIBUTING.md) for guidelines. If you want to contribute before formal guidelines are up, just open an issue — we're friendly.

---

## License

Apache License 2.0 — see [LICENSE](LICENSE) for details.
