Metadata-Version: 2.4
Name: work-memory-mcp
Version: 0.1.0
Summary: Local-first, cross-project development memory layer
License-Expression: MIT
Project-URL: Homepage, https://github.com/joe02740/work-memory
Project-URL: Repository, https://github.com/joe02740/work-memory
Project-URL: Issues, https://github.com/joe02740/work-memory/issues
Keywords: mcp,memory,vscode,sqlite,developer-tools,local-first
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Utilities
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: mcp>=1.0.0
Provides-Extra: dev
Requires-Dist: pytest>=8.0; extra == "dev"
Dynamic: license-file

# Work Memory

Work Memory is a local-first development memory layer for VS Code and MCP clients. It stores cross-repository work history in SQLite, keeps an append-only JSONL archive for auditability, and exposes that shared memory through a local MCP server.

It is designed for single-machine use first, but the codebase is portable and the memory store can be backed up, restored, or pointed at a synced location.
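The append-only archive can be pictured as a one-function sketch. This is illustrative only; the real `storage` module also maintains the SQLite index and richer event records:

```python
import json
import time
from pathlib import Path

ARCHIVE = Path("raw_events.jsonl")

def append_event(event: dict) -> None:
    # Each event becomes one JSON line appended to the archive.
    # Existing lines are never rewritten or deleted.
    record = {"ts": time.time(), **event}
    with ARCHIVE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

append_event({"type": "note", "text": "Auth migration is blocked"})
append_event({"type": "turn", "role": "user", "text": "How should we model auth?"})
```

Because every write is an append of one complete JSON line, the archive stays valid even if a later write is interrupted.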

## What It Does

- stores turns, notes, commands, files, entities, topics, and commit references
- imports local VS Code chat history from `workspaceStorage`
- serves that memory back into VS Code through MCP
- keeps retrieval deterministic with metadata-first filters before text search
- includes an RLM-inspired recursive recall mode with trajectory metadata and optional JSONL logs
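The metadata-first idea in the list above can be sketched with plain SQLite, using `LIKE` as a stand-in for the real full-text search. The table shape and column names here are assumptions for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (repo_name TEXT, tag TEXT, text TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [
        ("repo-a", "architecture", "model auth and deployment memory"),
        ("repo-a", "memory", "keep raw history append-only"),
        ("repo-b", "architecture", "auth flows for the gateway"),
    ],
)

def recall(repo_name: str, tag: str, query: str) -> list[str]:
    # Metadata filters narrow the candidate set first; the text match
    # only runs inside that deterministic subset.
    rows = conn.execute(
        "SELECT text FROM events WHERE repo_name = ? AND tag = ? AND text LIKE ?",
        (repo_name, tag, f"%{query}%"),
    )
    return [r[0] for r in rows]

print(recall("repo-a", "architecture", "auth"))
```

Filtering on exact metadata before scoring text keeps results reproducible: the same filters always select the same candidate rows.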

## Quick Start

```bash
python -m venv .venv
.venv\Scripts\activate
pip install -e .[dev]
work-memory init
work-memory-install-vscode
```

After that, reload VS Code and trust the `work-memory` MCP server when prompted.

For a published install from PyPI, use the package name `work-memory-mcp`:

```bash
pip install work-memory-mcp
```

## VS Code Usage

Any other VS Code window on this machine can query the same history, as long as it is configured to launch this MCP server. The memory data is shared machine-wide by default under `%LOCALAPPDATA%/work-memory`, so the main requirement is MCP client configuration, not a separate database per window.

You do not need a global install just to use it across multiple VS Code windows on this computer. A workspace or user-level MCP config can point directly at this project's virtual environment:

```json
{
  "command": "C:/Project/work-memory/.venv/Scripts/python.exe",
  "args": ["-m", "work_memory.mcp_server", "--transport", "stdio"]
}
```

If you want a more stable command that does not depend on this repo path, install the package into a dedicated environment and point VS Code at `work-memory-mcp` instead. That is a convenience and portability improvement, not a functional requirement.

To install it for all VS Code windows in your user profile, run:

```bash
work-memory-install-vscode
```

That script:

- creates a dedicated runtime under `%LOCALAPPDATA%/work-memory/runtime`
- installs or upgrades this project into that runtime
- updates your user-level VS Code `mcp.json`
- enables `chat.mcp.autoStart`

This repository also includes a workspace-scoped example at `.vscode/mcp.json`.

If you use VS Code Settings Sync, enable MCP server synchronization and the user-level `mcp.json` entry will follow you across machines. You still need the runtime installed on each machine, but the server registration itself can sync.

## Architecture

The MVP keeps the boundary small:

- `storage`: initializes SQLite, writes immutable events, and appends raw JSONL records.
- `extractor`: derives tags, entities, and topics from text and metadata.
- `retrieval`: applies metadata filters first, then runs full-text search or topic matching.
- `service`: exposes a clean application interface that a future MCP wrapper can call.
- `cli`: provides local workflows without any cloud or LLM dependency.

Core concepts:

- `project`: repo-level identity (`repo_name`, `repo_path`)
- `session`: scoped work session (`session_id`, `branch`, `source`)
- `event`: immutable stored record (`turn`, `note`, `command`)
- `artifact`: file and commit references extracted from an event
- `tags`, `entities`, `topics`: normalized metadata for filtering and summarization
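One way to picture how these concepts relate is a small relational schema. This is an illustrative sketch, not the project's actual tables:

```python
import sqlite3

# Hypothetical schema mirroring the core concepts above:
# projects -> sessions -> events, with artifacts and tags per event.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE projects (repo_name TEXT PRIMARY KEY, repo_path TEXT);
CREATE TABLE sessions (
    session_id TEXT PRIMARY KEY,
    repo_name  TEXT REFERENCES projects(repo_name),
    branch TEXT,
    source TEXT
);
CREATE TABLE events (
    id INTEGER PRIMARY KEY,
    session_id TEXT REFERENCES sessions(session_id),
    kind TEXT CHECK (kind IN ('turn', 'note', 'command')),
    text TEXT
);
CREATE TABLE artifacts (event_id INTEGER, kind TEXT, ref TEXT);
CREATE TABLE tags (event_id INTEGER, tag TEXT);
""")
conn.execute("INSERT INTO projects VALUES ('repo-a', 'C:/code/repo-a')")
conn.execute("INSERT INTO sessions VALUES ('repo-a-2026-03-06', 'repo-a', 'main', 'cli')")
conn.execute(
    "INSERT INTO events (session_id, kind, text) "
    "VALUES ('repo-a-2026-03-06', 'note', 'handoff')"
)
```

The important property is the direction of the references: events are immutable rows, and tags, entities, and topics hang off events rather than being edited in place.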

## Setup

```bash
python -m venv .venv
.venv\Scripts\activate
pip install -e .[dev]
```

If you are installing from PyPI instead of from source:

```bash
pip install work-memory-mcp
```

Initialize storage. By default this uses a machine-level directory under `%LOCALAPPDATA%/work-memory` so multiple repositories can share one memory store. Use `--root` only when you want an isolated project-local store.

```bash
work-memory init
```

Override the storage root for testing or local-only use:

```bash
work-memory --root C:/Project/work-memory init
```

## Example Commands

Store a user turn:

```bash
work-memory store-turn \
  --repo-name repo-a \
  --repo-path C:/code/repo-a \
  --branch main \
  --session-id repo-a-2026-03-06 \
  --role user \
  --source cli \
  --text "How should we model auth and deployment memory?" \
  --tag architecture \
  --file src/auth.py
```

Store an assistant response:

```bash
work-memory store-turn \
  --repo-name repo-a \
  --repo-path C:/code/repo-a \
  --branch main \
  --session-id repo-a-2026-03-06 \
  --role assistant \
  --source cli \
  --text "Use metadata-first retrieval and keep raw history append-only." \
  --tag memory \
  --tag architecture
```

Store a command:

```bash
work-memory store-command \
  --repo-name repo-a \
  --repo-path C:/code/repo-a \
  --branch main \
  --session-id repo-a-2026-03-06 \
  --source terminal \
  --command "pytest -q" \
  --cwd C:/code/repo-a \
  --exit-code 0 \
  --file tests/test_auth.py
```

Store a note:

```bash
work-memory store-note \
  --repo-name repo-a \
  --repo-path C:/code/repo-a \
  --branch main \
  --session-id repo-a-2026-03-06 \
  --source manual \
  --title "handoff" \
  --text "Auth migration is blocked on Discord thread decisions." \
  --tag auth \
  --tag handoff
```

Project-scoped recall:

```bash
work-memory recall-project \
  --repo-name repo-a \
  --query auth \
  --tag architecture
```

RLM-inspired recursive recall:

```bash
work-memory recursive-recall \
  --query discord \
  --max-depth 2 \
  --branch-limit 3 \
  --log-dir ./logs
```
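Conceptually, recursive recall seeds follow-up queries from each hit's metadata, bounded by `--max-depth` and `--branch-limit`. A toy in-memory version, with invented sample events (the real command works against the SQLite store and records trajectory metadata):

```python
# Hypothetical sample events; tags stand in for the extracted metadata.
EVENTS = [
    {"text": "discord webhook retries", "tags": ["discord", "webhooks"]},
    {"text": "webhooks need signing", "tags": ["webhooks", "auth"]},
    {"text": "auth tokens rotate daily", "tags": ["auth"]},
]

def recall(query, max_depth=2, branch_limit=3, _seen=None):
    seen = _seen if _seen is not None else set()
    hits = [e for e in EVENTS if query in e["text"] or query in e["tags"]]
    results = [h["text"] for h in hits if h["text"] not in seen]
    seen.update(results)
    if max_depth > 0:
        # Branch into related tags, at most branch_limit per level.
        branches = {t for h in hits for t in h["tags"] if t != query}
        for tag in sorted(branches)[:branch_limit]:
            results += recall(tag, max_depth - 1, branch_limit, seen)
    return results

print(recall("discord"))
# → ['discord webhook retries', 'webhooks need signing', 'auth tokens rotate daily']
```

Depth and branch limits bound the search, and the shared `seen` set keeps the recursion from revisiting events it has already surfaced.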

Cross-project recall with explicit filters:

```bash
work-memory recall-cross-project \
  --query discord \
  --topic auth \
  --entity Discord \
  --repo-name repo-a \
  --repo-name repo-b
```

Summarize a topic:

```bash
work-memory summarize-topic --topic memory --limit 10
```

Discover likely VS Code Copilot Chat sources on this machine:

```bash
work-memory discover-sources --limit 20
```

Import structured VS Code chat sessions as reconstructed user/assistant turns:

```bash
work-memory import-vscode-sessions --limit 200
```

Import discovered VS Code Copilot Chat resources into the shared store:

```bash
work-memory import-vscode-copilot --limit 200
```

Import an exported JSONL, JSON, or text transcript:

```bash
work-memory import-path \
  --path C:/exports/copilot-chat.jsonl \
  --repo-name imported-chat \
  --repo-path C:/imports/copilot-chat \
  --session-id import-2026-03-06 \
  --source exported-chat \
  --format auto
```

Import a VS Code chat session export or copied `chatSessions/*.jsonl` file with explicit structured reconstruction:

```bash
work-memory import-path \
  --path C:/exports/session.jsonl \
  --repo-name imported-chat \
  --repo-path C:/imports/session \
  --session-id import-2026-03-06 \
  --source exported-chat \
  --format vscode-chat-session
```

## Seed Data

```bash
python scripts/seed_sample.py
```

That script creates example records across multiple repositories so you can validate project-scoped and cross-project retrieval.

## Storage Layout

- `%LOCALAPPDATA%/work-memory/work_memory.db`: shared SQLite database by default
- `%LOCALAPPDATA%/work-memory/raw_events.jsonl`: append-only archive by default
- `data/work_memory.db`: used only when `--root` points at a project root
- `data/raw_events.jsonl`: used only when `--root` points at a project root

Raw history is never mutated after append. Search and recall operate from SQLite while the JSONL archive remains transparent for debugging or replay.
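Because the archive is plain JSONL, replaying it into a fresh SQLite index is straightforward. A minimal sketch with a fabricated archive (the JSONL file is only read; SQLite is treated as a derived index):

```python
import json
import sqlite3
from pathlib import Path

# Fabricated two-line archive standing in for raw_events.jsonl.
archive = Path("raw_events.jsonl")
archive.write_text(
    '{"type": "note", "text": "auth handoff"}\n'
    '{"type": "turn", "text": "use metadata-first recall"}\n',
    encoding="utf-8",
)

# Rebuild a queryable table from the append-only archive.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (type TEXT, text TEXT)")
for line in archive.read_text(encoding="utf-8").splitlines():
    rec = json.loads(line)
    conn.execute("INSERT INTO events VALUES (?, ?)", (rec["type"], rec["text"]))

print(conn.execute("SELECT COUNT(*) FROM events").fetchone()[0])
```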

## Current Import Reality

The importer handles VS Code data in two ways:

- `import-vscode-sessions` reconstructs structured chat sessions from `workspaceStorage/*/chatSessions/*.jsonl` into user and assistant turns.
- `import-vscode-copilot` preserves Copilot chat resource files as raw imported notes for auditability and fallback coverage.

Repo inference prefers `workspace.json`, so single-folder VS Code workspaces can usually map imported session history back to the real repo path instead of a workspace hash.

## MCP Server

Run the MCP server over stdio:

```bash
work-memory-mcp --transport stdio
```

Run it over streamable HTTP for shared local access:

```bash
work-memory-mcp --transport streamable-http --host 127.0.0.1 --port 8000 --json-response
```

The MCP layer exposes tools for:

- `memory_status`
- `search_memory`
- `recall_project_memory`
- `recall_cross_project_memory`
- `recursive_recall_memory`
- `summarize_memory_topic`
- `store_memory_note`
- `discover_memory_sources`
- `import_vscode_sessions`
- `import_vscode_copilot_resources`

It also exposes a `memory://status` resource and a `memory_query_plan` prompt.

For a local MCP client that supports stdio, point it at:

```json
{
  "command": "C:/Project/work-memory/.venv/Scripts/python.exe",
  "args": ["-m", "work_memory.mcp_server", "--transport", "stdio"]
}
```

## GitHub And Other Machines

The codebase is portable and not tied to this machine, but the stored memory is local-first: the database and JSONL archive live on each machine unless you explicitly copy or sync them.

What GitHub gives you:

- backup and version history for the code
- a simple install source for other machines
- a shareable project other people can clone and run

What GitHub does not give you automatically:

- sync of `%LOCALAPPDATA%/work-memory/work_memory.db`
- sync of `%LOCALAPPDATA%/work-memory/raw_events.jsonl`

To use this on another computer:

```bash
git clone <your-repo-url>
cd work-memory
python -m venv .venv
.venv\Scripts\activate
pip install -e .[dev]
work-memory init
```

If you want to preserve existing history on a second machine, copy the storage directory or set `WORK_MEMORY_HOME` to a synced location before starting the server.

Examples:

```powershell
$env:WORK_MEMORY_HOME = "D:/synced/work-memory"
work-memory init
work-memory-mcp --transport stdio
```

Portable backup and restore helpers are included:

```bash
work-memory-backup --destination C:/backups
work-memory-restore --backup C:/backups/work-memory-backup-20260306-120000.zip
```
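Under the hood, a backup like this amounts to zipping the two storage files into a timestamped archive. A minimal sketch of the assumed behavior; the actual helper may do more (locking, integrity checks):

```python
import tempfile
import zipfile
from datetime import datetime
from pathlib import Path

def backup_store(home: Path, destination: Path) -> Path:
    # Zip the database and JSONL archive into one timestamped file,
    # following the work-memory-backup-YYYYMMDD-HHMMSS.zip naming.
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = destination / f"work-memory-backup-{stamp}.zip"
    with zipfile.ZipFile(target, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in ("work_memory.db", "raw_events.jsonl"):
            src = home / name
            if src.exists():
                zf.write(src, arcname=name)
    return target

# Demo against a throwaway store:
home = Path(tempfile.mkdtemp())
(home / "work_memory.db").write_bytes(b"")
(home / "raw_events.jsonl").write_text("{}\n", encoding="utf-8")
backup = backup_store(home, Path(tempfile.mkdtemp()))
print(backup.name)
```

Restore is the inverse: extract the two files back into the storage root while the server is stopped.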

If you want true cross-machine continuity instead of occasional backups, set `WORK_MEMORY_HOME` to a synced folder on each machine before starting the MCP server.

```powershell
$env:WORK_MEMORY_HOME = "D:/synced/work-memory"
work-memory init
work-memory-install-vscode
```

## PyPI Publishing

This repository is configured for GitHub Actions Trusted Publishing to PyPI.

Workflow file:

- `.github/workflows/publish-pypi.yml`

Trusted Publisher values for PyPI:

- owner: `joe02740`
- repository: `work-memory`
- workflow name: `publish-pypi.yml`
- environment name: leave blank

Release flow:

1. Add the Trusted Publisher in the PyPI project settings for `work-memory-mcp`.
2. Push this repository state to GitHub.
3. Create a GitHub release such as `v0.1.0`.
4. GitHub Actions builds `dist/*` and publishes the package to PyPI.

## Recommended Distribution Path

The practical distribution path:

1. Push the code to GitHub.
2. Keep the local memory store out of git.
3. Publish `work-memory-mcp` to PyPI from a GitHub release.
4. Install from PyPI or from source on each machine.
5. If you later want cross-machine history sync, add explicit export/import or a synced storage location.

That keeps the project local-first while still making it reusable by you and other people.
