Metadata-Version: 2.4
Name: dhee
Version: 6.1.0
Summary: Dhee Developer Brain — local memory, handoff, and git-backed context for AI coding agents
Author: Sankhya AI Labs
License: MIT
Project-URL: Homepage, https://github.com/Sankhya-AI/Dhee
Project-URL: Repository, https://github.com/Sankhya-AI/Dhee
Project-URL: Issues, https://github.com/Sankhya-AI/Dhee/issues
Project-URL: Documentation, https://github.com/Sankhya-AI/Dhee#readme
Project-URL: Changelog, https://github.com/Sankhya-AI/Dhee/blob/main/CHANGELOG.md
Keywords: memory-layer,cognition,mcp,self-evolving,hyperagent,ai,agents,plugin,llm,edge
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pydantic>=2.0
Requires-Dist: requests>=2.28.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: cryptography>=42.0.0
Provides-Extra: gemini
Requires-Dist: google-genai>=1.0.0; extra == "gemini"
Provides-Extra: openai
Requires-Dist: openai>=1.0.0; extra == "openai"
Provides-Extra: ollama
Requires-Dist: ollama>=0.4.0; extra == "ollama"
Provides-Extra: nvidia
Requires-Dist: openai>=1.0.0; extra == "nvidia"
Provides-Extra: zvec
Requires-Dist: zvec>=0.2.1; extra == "zvec"
Provides-Extra: sqlite-vec
Requires-Dist: sqlite-vec>=0.1.1; extra == "sqlite-vec"
Provides-Extra: local
Requires-Dist: llama-cpp-python>=0.3; extra == "local"
Requires-Dist: sentence-transformers>=3.0; extra == "local"
Provides-Extra: mcp
Requires-Dist: mcp>=1.0.0; extra == "mcp"
Provides-Extra: api
Requires-Dist: fastapi>=0.100.0; extra == "api"
Requires-Dist: uvicorn>=0.20.0; extra == "api"
Provides-Extra: bus
Requires-Dist: engram-bus>=0.1.0; extra == "bus"
Provides-Extra: benchmarks
Requires-Dist: huggingface_hub>=0.24.0; extra == "benchmarks"
Provides-Extra: edge
Requires-Dist: onnxruntime>=1.16; extra == "edge"
Provides-Extra: training
Requires-Dist: unsloth; extra == "training"
Requires-Dist: datasets>=2.0; extra == "training"
Requires-Dist: trl>=0.7; extra == "training"
Requires-Dist: peft>=0.6; extra == "training"
Provides-Extra: app
Requires-Dist: openai>=1.0.0; extra == "app"
Requires-Dist: google-genai>=1.0.0; extra == "app"
Requires-Dist: ollama>=0.4.0; extra == "app"
Provides-Extra: all
Requires-Dist: google-genai>=1.0.0; extra == "all"
Requires-Dist: openai>=1.0.0; extra == "all"
Requires-Dist: ollama>=0.4.0; extra == "all"
Requires-Dist: mcp>=1.0.0; extra == "all"
Requires-Dist: fastapi>=0.100.0; extra == "all"
Requires-Dist: uvicorn>=0.20.0; extra == "all"
Requires-Dist: engram-bus>=0.1.0; extra == "all"
Requires-Dist: huggingface_hub>=0.24.0; extra == "all"
Requires-Dist: llama-cpp-python>=0.3; extra == "all"
Requires-Dist: sentence-transformers>=3.0; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.23.0; extra == "dev"
Requires-Dist: openai>=1.0.0; extra == "dev"
Requires-Dist: huggingface_hub>=0.24.0; extra == "dev"
Requires-Dist: build>=1.0.0; extra == "dev"
Requires-Dist: twine>=5.0.0; extra == "dev"
Dynamic: license-file

<p align="center">
  <img src="docs/dhee-logo.png" alt="Dhee" width="80">
</p>

<h1 align="center">Dhee — the information layer for collaborating AI agents</h1>

<h3 align="center">Local memory, shared learnings, and context routing for Hermes, Claude Code, Codex, Cursor, Gemini CLI, Aider, Cline, and any MCP client.</h3>

<p align="center">
  Dhee is the information layer through which your agents collaborate. When one agent creates a reusable learning, Dhee captures it as a candidate; once promoted, every connected agent can use it.
</p>

<p align="center">
  <a href="https://pypi.org/project/dhee"><img src="https://img.shields.io/pypi/v/dhee?style=flat-square&color=orange" alt="PyPI"></a>
  <a href="https://pypi.org/project/dhee"><img src="https://img.shields.io/badge/python-3.9%2B-blue.svg?style=flat-square" alt="Python 3.9+"></a>
  <a href="https://github.com/Sankhya-AI/Dhee/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue.svg?style=flat-square" alt="MIT License"></a>
  <a href="benchmarks/longmemeval/"><img src="https://img.shields.io/badge/LongMemEval-%231%20recall%20%E2%80%94%20R%401%2094.8%25-brightgreen.svg?style=flat-square" alt="#1 on LongMemEval recall"></a>
</p>

<p align="center">
  <b>#1 on LongMemEval retrieval</b> — R@1 <b>94.8%</b> · R@5 <b>99.4%</b> · R@10 <b>99.8%</b> on the full 500-question set. <a href="#benchmarks">Reproduce it →</a>
</p>

<p align="center">
  <img src="docs/demo/demo.gif" alt="Dhee demo — fat skills, thin tokens, self-tuning retrieval" width="900">
</p>

<p align="center">
  <a href="#what-is-dhee">What is Dhee</a> ·
  <a href="#shared-agent-learning">Shared Agent Learning</a> ·
  <a href="#dheefs">DheeFS</a> ·
  <a href="#quick-start">Quick Start</a> ·
  <a href="#repo-shared-context">Repo-Shared Context</a> ·
  <a href="#benchmarks">Benchmarks</a> ·
  <a href="#how-it-works">How It Works</a> ·
  <a href="#vs-alternatives">vs Alternatives</a> ·
  <a href="#integrations">Integrations</a>
</p>

---

## What is Dhee?

**Dhee is the local information layer through which your agents collaborate.** It runs on your machine, uses SQLite, plugs into Hermes, Claude Code, Codex, and any MCP client, and does four jobs the model can't do for itself:

1. **🧠 Remembers.** Doc chunks, decisions, what worked, what failed, user preferences. Ebbinghaus decay pushes stale knowledge out of the hot path; frequently used memory gets promoted. Per-turn context stays bounded and relevant instead of becoming another giant prompt file.

2. **🔁 Routes.** A 10 MB `git log` becomes a compact digest with a pointer. Raw output only re-enters context when the model explicitly expands it. On heavy tool-output calls, this is where the 90%+ token reduction comes from.

3. **🌱 Shares learnings.** Hermes memory, session traces, and agent-created skills flow into Dhee as auditable learning candidates. Only promoted learnings appear as "Learned Playbooks" for Claude Code, Codex, Hermes, and any Dhee-enabled agent. No separate middleman agent.

4. **⚙️ Self-tunes.** Dhee watches which digests the model expands and which retrieval depths are useful, then tunes router policy per tool, per intent, per file type. The goal is not a bigger prompt; it is a smaller, better one.

### Who it's for

- **Every Claude Code / Cursor / Codex / Gemini CLI / Aider / Cline user** who has ever hit a context limit or a $200 token bill.
- **Hermes users** who already have a self-evolving agent and want those learnings to make Claude Code and Codex smarter too.
- **Any team** with a 2,000-line `CLAUDE.md`, a Skills library, an `AGENTS.md`, or a prompt library that's "too big for context." Stop pruning. Dhee handles delivery.
- **Anyone who wants their team to share context through git** — the same way they share code.

---

## <span id="shared-agent-learning">Shared Agent Learning — one promoted learning, every agent benefits</span>

Hermes can evolve its own skills and memories. Claude Code has native hooks. Codex has MCP config, `AGENTS.md`, and a persisted session stream. Dhee is the information layer underneath them: it turns separate agent histories into shared, gated context.

```text
Hermes MemoryProvider
  ├─ MEMORY.md / USER.md writes
  ├─ agent-created skills
  ├─ session summaries and outcomes
  └─ self-evolution traces
          │
          ▼
      Dhee Learning Exchange
          │
          ├─ candidate  -> review / evidence / score
          ├─ promoted   -> injected as Learned Playbooks
          └─ rejected   -> auditable, never injected
          │
          ▼
Claude Code · Codex · Hermes · any MCP client
```

What this means in practice:

- Your existing Hermes progress is not stranded inside Hermes. `dhee install` detects Hermes when present, installs Dhee as a Hermes `MemoryProvider` at `~/.hermes/plugins/memory/dhee`, and imports local Hermes memory files, session summaries, and agent-created skills into Dhee.
- Claude Code and Codex do not need to launch Hermes to benefit. They receive promoted Hermes/Dhee learnings through normal Dhee context and MCP tools.
- New Claude Code and Codex outcomes can become Dhee learning candidates too. After promotion, Hermes can read them back through the same provider.
- Candidate learnings are never auto-injected. Trusted Hermes `MEMORY.md` / `USER.md` imports may be promoted during install; Hermes `SOUL.md`, session traces, and agent-created skills stay candidates until explicitly approved or promoted by policy.

This is the product contract: **with Dhee, a learning proven in one agent can become a promoted playbook for every connected agent.**
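A typical review loop, using the CLI plus the DheeFS shell described below (the learning id is illustrative, not a real format):

```bash
dhee learn search --include-candidates   # see what agents have proposed
dhee shell "why L-1234"                  # evidence and score behind a candidate (illustrative id)
dhee shell "promote L-1234"              # gate it into Learned Playbooks for every agent
dhee shell "reject L-1234"               # ...or keep it auditable but never injected
```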

### Reality check

- **Hermes native:** Dhee integrates as a Hermes `MemoryProvider`, the first-class Hermes memory-plugin surface. Hermes allows one active external memory provider, so V1 replaces Honcho/Mem0/etc. while `memory.provider: dhee` is active.
- **Claude Code native:** Dhee uses Claude Code hooks, MCP, and router enforcement. This is the strongest integration surface.
- **Codex native:** Codex does not expose Claude-style pre-tool hooks. Dhee uses the closest native Codex surfaces instead: `~/.codex/config.toml`, global `~/.codex/AGENTS.md`, MCP server instructions, and Codex session-stream auto-sync.
- **Promotion gate:** Imported Hermes skills and session traces are candidates by default. Rejected or archived learnings remain auditable but are excluded from retrieval.

---

## <span id="dheefs">DheeFS — one local learning space every agent already understands</span>

Agents already understand files and shell verbs. DheeFS exposes Dhee's memory, router, handoff, artifacts, shared tasks, and learning exchange as one virtual context space:

```bash
dhee shell "ls /learnings"
dhee shell "cat /handoff/latest.md"
dhee shell "grep parser /learnings/promoted"
dhee shell "cat /router/ptr/R-abc123"
```

The first version is a virtual shell, not FUSE. It intentionally supports a small approved command set: `ls`, `cat`, `grep`, `why`, `promote`, `reject`, `broadcast`, `provision`, and `snapshot`. The same surface is available through MCP as `dhee_shell(command)` and through Python:

```python
from dhee import ContextWorkspace

result = ContextWorkspace(repo=".").execute("provision 'fix parser bug'")
print(result.stdout)
```

External systems such as Slack, Gmail, and Notion are future **context sources** under `/sources`, not generic remote action backends. They can sync and search evidence into Dhee artifacts, learnings, and handoffs without making the core install depend on SaaS SDKs.

```text
/learnings   candidates, promoted, rejected, archived
/handoff     latest repo/session continuity
/router/ptr  raw pointer lookup when explicitly requested
/artifacts   host-parsed files and chunks
/repo        .dhee/context decisions and conventions
/agents      Hermes, Claude Code, Codex views
/shared      inbox, broadcasts, shared task results
/sources     optional future Slack/Gmail/Notion context mounts
```

---

## Quick Start

**One command. No venv. No config. No pasting into `settings.json`.**

```bash
curl -fsSL https://raw.githubusercontent.com/Sankhya-AI/Dhee/main/install.sh | sh
```

The installer creates `~/.dhee/`, installs the `dhee` package, and auto-wires Claude Code, Codex, and Hermes when detected. Open your agent in any project — cognition is on.

<details>
<summary><b>Other install paths</b></summary>

```bash
# Via pip
pip install dhee
dhee install                      # configure supported agent harnesses

# From source
git clone https://github.com/Sankhya-AI/Dhee.git
cd Dhee && ./scripts/bootstrap_dev_env.sh
source .venv-dhee/bin/activate
dhee install
```

</details>

After install, Dhee auto-ingests project docs (`CLAUDE.md`, `AGENTS.md`, `SKILL.md`, etc.) on the first session. Run `dhee ingest` any time to re-chunk.

```bash
dhee install                  # configure local agent harnesses
dhee hermes status            # see whether Hermes is detected and Dhee-backed
dhee hermes sync --dry-run    # preview Hermes memories/skills before import
dhee learn search --include-candidates  # inspect candidates and promotions
dhee link /path/to/repo       # share context with teammates through this repo
dhee context refresh          # refresh repo context after pull/checkout
dhee handoff                  # compact continuity for current repo/session
dhee key set openai           # store a provider key locally (encrypted)
dhee router report            # token-savings stats + replay projection
dhee router tune              # re-tune retrieval policy from usage
```

---

## <span id="repo-shared-context">Repo-Shared Context — git is the sync layer</span>

Most "team memory" tools need a server. Dhee uses the one your team already trusts: **git**.

```bash
dhee link /path/to/repo
```

Dhee creates a tracked folder inside your repo:

```text
<repo>/.dhee/
  config.json
  context/manifest.json
  context/entries.jsonl
```

Commit it. Teammates who pull the repo and have Dhee installed get the **same shared context** — decisions, conventions, what-not-to-do — surfaced into their agent automatically.
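
In practice the loop is just normal git, plus a refresh on the receiving side:

```bash
dhee link .                                   # writes .dhee/ into the current repo
git add .dhee && git commit -m "Share Dhee context"
git push

# A teammate with Dhee installed:
git pull
dhee context refresh
```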

Shared context is **append-only and git-friendly**. If two developers edit overlapping context concurrently, Dhee keeps both versions and reports a conflict instead of silently dropping one developer's work. The installed `pre-push` hook blocks unresolved conflicts from leaving the laptop:

```bash
dhee context check --repo /path/to/repo
```

**No hosted service. No org account. Your repo is the team brain.**

---

## Benchmarks

> **#1 on LongMemEval recall.** R@1 **94.8%**, R@5 **99.4%**, R@10 **99.8%** — full 500 questions, no held-out split, no cherry-picking.

| System | R@1 | R@3 | R@5 | R@10 |
|:-------|:----|:----|:----|:-----|
| **Dhee** | **94.8%** | **99.0%** | **99.4%** | **99.8%** |
| [MemPalace](https://github.com/MemPalace/mempalace#benchmarks) (raw) | — | — | 96.6% | — |
| [MemPalace](https://github.com/MemPalace/mempalace#benchmarks) (hybrid v4, held-out 450q) | — | — | 98.4% | — |
| [agentmemory](https://github.com/rohitg00/agentmemory#benchmarks) | — | — | 95.2% | 98.6% |

Stack: NVIDIA `llama-nemotron-embed-vl-1b-v2` embedder + `llama-3.2-nv-rerankqa-1b-v2` reranker, top-k 10.

**Proof is in-tree, not screenshots.** Exact command, metrics, and per-question output live under [`benchmarks/longmemeval/`](benchmarks/longmemeval/). Recompute R@k yourself — any mismatch is a bug you can open.
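
To recreate the same stack locally, the optional extras cover the dependencies; the exact run command and data layout live in [`benchmarks/longmemeval/`](benchmarks/longmemeval/):

```bash
pip install "dhee[nvidia,benchmarks]"   # NVIDIA NIM provider plus benchmark dependencies
```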

---

## How It Works

```
                  ┌──────────────────────────────┐
                  │   Your fat context             │
                  │   CLAUDE.md · AGENTS.md ·      │
                  │   SKILL.md · prompts · docs ·  │
                  │   sessions · tool output       │
                  └──────────────┬─────────────────┘
                                 │ ingest once
                                 ▼
       ┌────────────────────────────────────────────────────┐
       │             Dhee · local SQLite brain               │
       │                                                     │
       │  doc chunks · short-term · long-term · insights ·   │
       │  beliefs · policies · intentions · episodes · edits │
       └─────────────────────┬───────────────────────────────┘
                             │
              ┌──────────────┴───────────────┐
              ▼                              ▼
       Session start                    Each user prompt
       (full assembly)                  (matching slice only)
              │                              │
              └──────────────┬───────────────┘
                             ▼
              ┌────────────────────────────┐
              │  Token-budgeted XML         │
              │  <dhee v="1">               │
              │    <doc src="CLAUDE.md"…/>  │
              │    <i>What worked last…</i> │
              │  </dhee>                    │
              └────────────────────────────┘
                             │
                  Model sees only what it
                  needs, when it needs it.
```

On the tool-use side, the **router** digests raw output **at source** — never letting raw `Read`, `Bash`, or subagent results into context unless the model asks.

### The four-operation API

Every interface — hooks, MCP, Python, CLI — exposes the same four operations.

```python
from dhee import Dhee
d = Dhee()
d.remember("User prefers FastAPI over Flask")
d.recall("what framework does this project use?")
d.context("fixing the auth bug")
d.checkpoint("Fixed auth bug", what_worked="git blame first", outcome_score=1.0)
```

| Operation | LLM calls | Cost |
|:----------|:---------:|:----:|
| `remember` / `recall` / `context` | 0 | ~$0.0002 |
| `checkpoint` | 1 per ~10 memories | ~$0.001 |
| **Typical 20-turn Opus session** | **~1** | **~$0.004** |

Dhee overhead: ~$0.004/session. Token savings on the same 20-turn session: **~$0.50+**. **>100× ROI.**

### The router — digest at source

Four MCP tools replace `Read` / `Bash` / `Agent` on heavy calls:

- `dhee_read(file_path, offset?, limit?)` — symbols, head, tail, kind, token estimate + pointer.
- `dhee_bash(command)` — output digested by class (git log, pytest, grep, listing, generic).
- `dhee_agent(text)` — file refs, headings, bullets, error signals from any subagent return.
- `dhee_expand_result(ptr)` — only called when the digest genuinely isn't enough.

A 10 MB `git log --oneline -50000` becomes a ~200-token digest. This is where the serious savings live.
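
Digests and their pointers are also inspectable by hand through DheeFS, which helps when auditing what the router kept out of context (a sketch: the pointer id is illustrative, and it assumes the router mount supports the same `ls`/`cat` verbs as the rest of DheeFS):

```bash
dhee shell "ls /router/ptr"              # pointers for recently digested tool calls
dhee shell "cat /router/ptr/R-abc123"    # the raw output a digest stands in for
dhee router report                       # token-savings stats + replay projection
```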

### Self-tuning retrieval

Most memory layers are static: you write rules, they retrieve. Dhee watches what happens and tunes itself.

- **Intent classification.** Every `Read`/`Bash`/`Agent` call is bucketed (source, test, config, doc, data, build). Each bucket gets its own retrieval depth.
- **Expansion ledger.** Every `dhee_expand_result(ptr)` is logged with `(tool, intent, depth)`.
- **Policy tuning.** `dhee router tune` reads the ledger and atomically rewrites `~/.dhee/router_policy.json` — deeper for what gets expanded, shallower for what doesn't.

Frontend-heavy teams get deeper JS/TS digests. Data teams get richer CSV/JSONL summaries. **You don't pick — Dhee picks, based on what you actually expand.**
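
As a purely illustrative sketch (the real `~/.dhee/router_policy.json` schema is not documented here and may differ), a tuned policy might encode per-tool, per-intent depths along these lines:

```json
{
  "_note": "illustrative only, not the actual schema",
  "dhee_read": {
    "source": "symbols+head",
    "test": "head",
    "data": "rich-summary"
  },
  "dhee_bash": {
    "git_log": "digest",
    "pytest": "failures-only"
  }
}
```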

---

## <span id="vs-alternatives">vs alternatives</span>

|  | **Dhee** | CLAUDE.md | Mem0 | Letta | MemPalace | agentmemory |
|:--|:-:|:-:|:-:|:-:|:-:|:-:|
| **Tokens / turn** | **~300** | 2,000+ | varies | ~1K+ | varies | ~1,900 |
| **LongMemEval R@5** | **99.4%** | — | — | — | 96.6% | 95.2% |
| **Self-tuning retrieval** | **Yes** | No | No | No | No | No |
| **Hermes → Claude/Codex learning exchange** | **Yes** | No | No | No | No | No |
| **Auto-digest tool output** | **Yes** | No | No | No | No | No |
| **Git-shared team context** | **Yes** | Manual | No | No | No | No |
| **Works across MCP agents** | **Yes** | No | Partial | No | Yes | Yes |
| **External DB required** | No (SQLite) | No | Qdrant/pgvector | Postgres+vector | No | No |
| **License** | MIT | — | Apache-2 | Apache-2 | MIT | MIT |

Dhee combines **token reduction, reproducible recall benchmarks, self-tuning retrieval policy, git-shared team context, and promoted cross-agent learning** in one local-first collaboration layer.

---

## Integrations

### Hermes Agent — native MemoryProvider

```bash
dhee install                  # detects Hermes and enables Dhee when present
dhee hermes status
dhee hermes sync --dry-run
```

Dhee installs as the Hermes memory provider, mirrors Hermes memory writes, imports local Hermes memory files, and checkpoints Hermes sessions into Dhee learning candidates. Curated `MEMORY.md` / `USER.md` imports can be promoted on install; `SOUL.md`, session traces, and agent-created skills stay gated. Promoted playbooks flow back into Hermes through the provider and out to Claude Code/Codex through Dhee context.
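
Once the dry run looks right, a plain sync performs the import (assuming default flags); imported skills and traces land as candidates, not promoted playbooks:

```bash
dhee hermes sync
dhee learn search --include-candidates
```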

### Claude Code — native hooks

```bash
pip install dhee && dhee install
```

Six lifecycle hooks fire at the right moments. Claude Code gets Dhee handoff, shared tasks, inbox broadcasts, learned playbooks, and router enforcement for heavy `Read`/`Bash`/`Grep` calls.

### Codex — closest native surface

```bash
pip install dhee && dhee install --harness codex
dhee harness status --harness codex
```

Dhee writes `~/.codex/config.toml`, manages a global `~/.codex/AGENTS.md` block, advertises context-first MCP instructions, and tails Codex session logs on Dhee calls. Codex does not currently expose Claude-style pre-tool hooks, so these are the deepest native surfaces Dhee can honestly use today.

### MCP server — Cursor, Gemini CLI, Cline, Goose, anything MCP

```json
{
  "mcpServers": {
    "dhee": { "command": "dhee-mcp" }
  }
}
```

### Python SDK / CLI / Docker

```bash
dhee remember "User prefers Python"
dhee recall  "programming language"
dhee ingest CLAUDE.md AGENTS.md
dhee checkpoint "Fixed auth" --what-worked "checked logs"
```
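
The same operations are available from Python (a minimal sketch of the four-operation API shown above, assuming `recall` returns a printable result):

```python
from dhee import Dhee

d = Dhee()
d.remember("User prefers Python")
print(d.recall("programming language"))
d.checkpoint("Fixed auth", what_worked="checked logs")
```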

### Provider options

```bash
pip install "dhee[openai,mcp]"    # cheapest embeddings
pip install "dhee[nvidia,mcp]"    # current SOTA stack
pip install "dhee[gemini,mcp]"
pip install "dhee[ollama,mcp]"    # local, no API costs
```

---

## Public vs Enterprise

| | **Public Dhee** (this repo, MIT) | **Dhee Enterprise** (private) |
|:--|:--|:--|
| Local memory + router | ✅ | ✅ |
| Self-tuning retrieval | ✅ | ✅ |
| Hermes → Claude Code/Codex learning exchange | ✅ | ✅ |
| Git-shared repo context | ✅ | ✅ |
| Claude Code / Codex / MCP | ✅ | ✅ |
| Org / team management | — | ✅ |
| Repo Brain code-intelligence | — | ✅ |
| Owner dashboard, billing, licensing | — | ✅ |
| Sentry-derived security telemetry | — | ✅ |

Public Dhee is the local collaboration layer — lightweight, trustworthy, and complete on its own. The commercial layer is closed-source and lives in `Sankhya-AI/dhee-enterprise`.

---

## FAQ

**What problem does Dhee solve?**
Large agent projects accumulate a fat `CLAUDE.md`, `AGENTS.md`, skills library, and tool output that get re-injected every turn. Dhee chunks, indexes, and decays that knowledge, and digests fat tool output at the source — so only the relevant ~300 tokens reach the model.

**How is Dhee different from Mem0, Letta, MemPalace, agentmemory?**
Dhee is built around four pieces most tools treat separately: reproducible LongMemEval results, a self-tuning retrieval/router policy, source-side digests for heavy `Read`/`Bash`/subagent output, and git-shared team context instead of a server.

**Does Dhee work with Claude Code, Cursor, Codex, Gemini CLI, Aider?**
Yes. Native Claude Code hooks, closest-native Codex config/AGENTS/session-stream sync, a Hermes MemoryProvider, an MCP server for every other host, plus a Python SDK and CLI. One install, every agent.

**Does Hermes make Claude Code and Codex smarter?**
Yes, through Dhee's learning exchange after promotion. Dhee can install as Hermes' memory provider, import Hermes memory/session/skill artifacts, and expose promoted learnings to Claude Code, Codex, and any MCP client as Learned Playbooks. Claude/Codex do not have to run Hermes to benefit.

**Does Claude Code or Codex evolve Hermes back?**
Yes, after promotion. Claude Code hooks, Codex session-stream sync, MCP memory tools, and learning submissions create Dhee learning candidates. Promoted personal/repo/workspace playbooks are retrieved by Hermes through the Dhee provider.

**How does the team-context sharing actually work?**
`dhee link /path/to/repo` writes a `.dhee/` directory inside your repo. Commit it. Teammates pull, install Dhee, and their agent surfaces the same shared decisions and conventions. Append-only with conflict detection — no overwrites, no server, no account.

**Is Dhee production-ready? What storage?**
SQLite by default. No Postgres, no Qdrant, no pgvector, no infra. The regression suite and reproducible benchmarks live in-tree. MIT, works offline with Ollama or online with OpenAI / NVIDIA NIM / Gemini.

**Where are the benchmarks and can I reproduce them?**
[`benchmarks/longmemeval/`](benchmarks/longmemeval/) — full command, per-question JSONL, `metrics.json`. Clone, run, recompute R@k. Any mismatch is an issue you can open.

---

## Contributing

```bash
git clone https://github.com/Sankhya-AI/Dhee.git
cd Dhee && ./scripts/bootstrap_dev_env.sh
source .venv-dhee/bin/activate
pytest
```

For the same full-suite path CI expects, including the local Rust acceleration
extension and async test plugin:

```bash
./scripts/verify_full_suite.sh
```

---

<p align="center">
  <b>Your fat skills stay fat. Your token bill stays thin. Promoted learnings travel with every agent.</b>
  <br><br>
  <a href="https://github.com/Sankhya-AI/Dhee">GitHub</a> ·
  <a href="https://pypi.org/project/dhee">PyPI</a> ·
  <a href="https://github.com/Sankhya-AI/Dhee/issues">Issues</a> ·
  <a href="https://sankhyaailabs.com">Sankhya AI</a>
</p>

<p align="center">MIT License — built by Sankhya AI Labs.</p>

<p align="center"><sub>
<b>Topics:</b> ai-agents · agent-memory · llm-memory · developer-brain · claude-code · claude-code-hooks · claudemd · agentsmd · mcp · mcp-server · model-context-protocol · context-router · context-engineering · context-compression · token-optimization · llm-tools · vector-memory · sqlite · longmemeval · retrieval-augmented-generation · rag · mem0-alternative · letta-alternative · mempalace-alternative · cursor · codex · gemini-cli · aider · cline · goose
</sub></p>
