Metadata-Version: 2.4
Name: memento-ai
Version: 0.2.0
Summary: Your codebase, remembered. AI-powered living memory for git projects.
Project-URL: Homepage, https://github.com/hernanqwz/memento
Project-URL: Repository, https://github.com/hernanqwz/memento
Project-URL: Issues, https://github.com/hernanqwz/memento/issues
Author-email: Hernan <hernan@eleata.io>
License-Expression: MIT
License-File: LICENSE
Keywords: ai,developer-tools,git,llm,memory
Classifier: Development Status :: 3 - Alpha
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Version Control :: Git
Requires-Python: >=3.9
Requires-Dist: click>=8.1
Requires-Dist: httpx>=0.25
Requires-Dist: pyyaml>=6.0
Requires-Dist: rich>=13.0
Requires-Dist: tomli>=2.0; python_version < '3.11'
Provides-Extra: dev
Requires-Dist: pytest-cov>=4.0; extra == 'dev'
Requires-Dist: pytest>=7.0; extra == 'dev'
Requires-Dist: ruff>=0.4; extra == 'dev'
Provides-Extra: local
Requires-Dist: llama-cpp-python>=0.3; extra == 'local'
Provides-Extra: mcp
Requires-Dist: mcp[cli]>=1.0; (python_version >= '3.10') and extra == 'mcp'
Description-Content-Type: text/markdown

# memento-ai

**Your codebase, remembered.**

memento watches your git commits and maintains a living memory of your project — readable by both humans and AI.

## How it works

```
git commit → memento analyzes the diff → updates markdown memory files
```

Memory files live in `.memento/memory/` — plain markdown, diffable, committable to your repo.
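For example, a hypothetical `auth-system.md` module might look like this (the contents are illustrative, not actual generated output):

```markdown
# auth-system

## Overview
JWT-based login with refresh tokens, enforced by middleware.

## Recent changes
- Added rate limiting to the login endpoint.
```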

## Quick start

```bash
pip install memento-ai
cd your-project
memento init
memento process --all
```

That's it. Your project now has memory.

## Ask questions

```bash
memento ask "what does the auth system do?"
memento ask "what changed in the last few commits?"
```

## Features

- **Incremental** — only processes new commits
- **Offline-first** — works with Ollama or embedded local models (zero API cost)
- **Multi-provider** — OpenAI, Anthropic, Claude CLI, Ollama, local inference
- **Human-readable** — memory is plain markdown
- **Git-native** — auto-processes via post-commit hook
- **Private** — your code never leaves your machine (with local providers)
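The post-commit hook is what makes processing automatic. A minimal sketch of what such a hook could look like (the exact script `memento init` writes may differ):

```shell
#!/bin/sh
# Hypothetical .git/hooks/post-commit: process the new commit in the
# background so the commit itself returns immediately; do nothing if
# memento is not on PATH.
command -v memento >/dev/null 2>&1 && memento process >/dev/null 2>&1 &
```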

## LLM Providers

| Provider | Setup | Cost |
|----------|-------|------|
| Claude CLI | Install [Claude Code](https://claude.ai/claude-code) and log in | Free (with a Max/Pro plan) |

| Ollama | `ollama pull qwen2.5-coder:7b` | Free |
| Local embedded | `pip install memento-ai[local]` | Free |
| OpenAI | Set `OPENAI_API_KEY` | Pay per token |
| Anthropic | Set `ANTHROPIC_API_KEY` | Pay per token |

### Using Ollama (recommended for privacy)

```bash
ollama pull qwen2.5-coder:7b
```

Ollama serves an OpenAI-compatible API, so point the `openai` provider at the local endpoint in `.memento/config.toml`:

```toml
[llm]
provider = "openai"  # Ollama exposes an OpenAI-compatible API
model = "qwen2.5-coder:7b"
base_url = "http://localhost:11434/v1"
```
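With this config, memento's requests to Ollama are ordinary OpenAI-style chat-completion calls. A sketch of such a request body (the prompt content is hypothetical):

```python
import json

# Hypothetical chat-completion payload sent to Ollama's OpenAI-compatible
# endpoint at http://localhost:11434/v1/chat/completions.
payload = {
    "model": "qwen2.5-coder:7b",
    "messages": [{"role": "user", "content": "Summarize this diff: ..."}],
    "temperature": 0.3,
}
body = json.dumps(payload)
print(body)
```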

### Using embedded local model (zero setup)

```bash
pip install memento-ai[local]
```

Edit `.memento/config.toml`:

```toml
[llm]
provider = "local"
# Auto-downloads Qwen2.5-Coder-3B (~2GB) on first run
```

### Using Claude CLI (free with Claude Max/Pro)

```toml
[llm]
provider = "claude-cli"
```

Requires [Claude Code](https://claude.ai/claude-code) installed and logged in.

## Commands

| Command | Description |
|---------|-------------|
| `memento init` | Initialize `.memento/`, install post-commit hook |
| `memento process` | Process new commits since last run |
| `memento process --all` | Process entire git history |
| `memento ask "question"` | Ask about your project |
| `memento status` | Show status: modules, commits processed |
| `memento forget` | Clear all memory, start fresh |
| `memento serve` | Start MCP server (stdio) |
| `memento export --format FMT` | Export memory (claude, cursor, copilot) |

## Configuration

`.memento/config.toml`:

```toml
[llm]
provider = "claude-cli"       # openai, anthropic, claude-cli, local
model = "gpt-4o-mini"         # model name (provider-specific)
base_url = "https://..."      # API base URL (openai provider)
temperature = 0.3
max_tokens = 2048

[processing]
chunk_size = 4000             # max diff size before chunking
summary_every = 10            # regenerate summary every N commits
ignore_patterns = ["*.lock", "dist/*"]

[memory]
dir = "memory"                # subdirectory for memory files
max_module_size = 5000        # max lines per module
```
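Since memento depends on `tomli` on Python < 3.11 (and the stdlib `tomllib` otherwise), the config is presumably parsed along these lines. A sketch with a hypothetical inline config:

```python
import sys

if sys.version_info >= (3, 11):
    import tomllib
else:
    import tomli as tomllib  # backport declared as a dependency for older Pythons

config_text = """
[llm]
provider = "claude-cli"
temperature = 0.3

[processing]
chunk_size = 4000
"""
cfg = tomllib.loads(config_text)
print(cfg["llm"]["provider"])  # claude-cli
```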

## Memory structure

```
.memento/
├── config.toml          # your configuration
├── state.json           # processing state (gitignored)
└── memory/
    ├── SUMMARY.md       # auto-generated project overview
    ├── api-endpoints.md # module: API routes and patterns
    ├── auth-system.md   # module: authentication logic
    └── database.md      # module: schema and queries
```

Modules are created and maintained automatically based on what the LLM finds in your commits.
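Because memory is plain files, the layout above can be explored with ordinary filesystem tools. A sketch that builds a throwaway copy of that structure and lists its modules:

```python
import tempfile
from pathlib import Path

# Recreate the directory layout shown above in a temp location.
memory = Path(tempfile.mkdtemp()) / ".memento" / "memory"
memory.mkdir(parents=True)
for name in ("SUMMARY.md", "api-endpoints.md", "auth-system.md", "database.md"):
    (memory / name).write_text(f"# {name}\n")

# Modules are every markdown file except the auto-generated summary.
modules = sorted(p.stem for p in memory.glob("*.md") if p.name != "SUMMARY.md")
print(modules)  # ['api-endpoints', 'auth-system', 'database']
```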

## MCP Server

memento exposes project memory via the [Model Context Protocol](https://modelcontextprotocol.io), so any MCP-compatible AI tool can access your project memory automatically.

```bash
pip install "memento-ai[mcp]"
```

### Claude Code

Add to `~/.claude/mcp.json`:

```json
{
  "mcpServers": {
    "memento": {
      "command": "memento-mcp"
    }
  }
}
```

### Cursor

Add via Settings → MCP Servers:

```json
{
  "mcpServers": {
    "memento": {
      "command": "memento-mcp"
    }
  }
}
```

### Available MCP tools

| Tool | Description |
|------|-------------|
| `memento_ask` | Ask a question about the project (uses LLM) |
| `memento_search` | Fast text search across memory (no LLM) |
| `memento_status` | Show modules, commits processed, last run |
| `memento_process` | Process new commits on-demand |

### Available MCP resources

| URI | Description |
|-----|-------------|
| `memento://summary` | Project summary |
| `memento://module/{name}` | Individual memory module |
| `memento://all` | All memory concatenated |

## Export

Export project memory to files that AI tools read automatically:

```bash
memento export --format claude   # → CLAUDE.md
memento export --format cursor   # → .cursor/rules/memento.mdc
memento export --format copilot  # → .github/copilot-instructions.md
```

## License

MIT
