Personal Memory Graph Server

YOUR MIND,
rendered as a
LIVING TOWN.

loci is a personal memory graph server. It maintains three layers — raw sources, an interpretation graph, and per-project views — and serves them to any client through a uniform citation contract.
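The citation contract is not specified here, but as a rough sketch of what binding a drafted claim to a graph node could look like (all field names and the marker format below are illustrative assumptions, not the real schema):

```python
from dataclasses import dataclass

# Hypothetical shape of a loci citation: the fields below are
# illustrative assumptions, not the actual contract.
@dataclass(frozen=True)
class Citation:
    node_id: str            # graph node being cited (PDF, code file, note)
    layer: str              # "raw", "interpretation", or "view"
    span: tuple[int, int]   # character range inside the source

def render_inline(c: Citation) -> str:
    """Render a citation as an inline marker for drafted markdown."""
    return f"[{c.node_id}@{c.span[0]}-{c.span[1]}]"

c = Citation(node_id="paper-42", layer="raw", span=(120, 180))
print(render_inline(c))  # → [paper-42@120-180]
```

The point of a uniform contract is that every client, from the VSCode extension to a plain HTTP consumer, can resolve the same marker back to the same node.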

THE SILENT PIPELINE

Unlike a conventional database, loci evolves as you work: a silent agentic pipeline maintains the interpretation layer.

  • 1. KICKOFF: Reads your profile and a sample of the raws, generating 5–8 relationship observations — loci that record how the workspace content connects to the project's goals — directly into the live graph at confidence 0.5.
  • 2. DRAFT: When you ask loci to draft something, it generates markdown with inline citations that map to specific nodes (PDFs, code files, notes).
  • 3. REFLECT: Every draft triggers a silent, agentic reflection cycle that may create, reinforce, or soften interpretation nodes based on the task and its citations.
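The reflect step above can be sketched as a confidence update over interpretation nodes. Only the 0.5 kickoff confidence comes from the pipeline description; the update rule and the 0.2 rate below are invented for illustration:

```python
# Sketch of the reflect cycle's effect on an interpretation node's
# confidence. The update rule and rates are assumptions; only the 0.5
# kickoff confidence comes from the pipeline description.
def reinforce(confidence: float, rate: float = 0.2) -> float:
    """Move confidence toward 1.0 when a draft's citations support the node."""
    return min(1.0, confidence + rate * (1.0 - confidence))

def soften(confidence: float, rate: float = 0.2) -> float:
    """Move confidence toward 0.0 when citations contradict or ignore it."""
    return max(0.0, confidence - rate * confidence)

c = 0.5              # kickoff confidence
c = reinforce(c)     # 0.5 + 0.2 * 0.5  = 0.6
c = soften(c)        # 0.6 - 0.2 * 0.6  = 0.48
print(round(c, 2))   # → 0.48
```

Bounding the update keeps confidence in [0, 1], so repeated reinforcement saturates rather than overflowing.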

LOKI-FRONTEND PREVIEW
Villagers represent nodes, tinted by modality: papers blue, code green, notes beige.

VISUALISE THE GRAPH

The loki-frontend VSCode extension visualises the graph as a living town.

THE TRACE

A villager walking to the plaza, pausing for 10 seconds, then returning home is one trace round-trip.
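A round-trip like that can be driven by a simple time-indexed state machine. Only the 10-second plaza pause comes from the text; the state names and the walk duration are assumptions:

```python
# Hypothetical state machine for a villager's trace round-trip.
# The 10-second pause is from the text; everything else is assumed.
PAUSE_SECONDS = 10

def trace_state(t: float, walk_seconds: float = 3.0) -> str:
    """Return the villager's state at t seconds into a round-trip."""
    if t < walk_seconds:
        return "walking_to_plaza"
    if t < walk_seconds + PAUSE_SECONDS:
        return "paused_at_plaza"
    if t < 2 * walk_seconds + PAUSE_SECONDS:
        return "returning_home"
    return "home"

print(trace_state(1.0))   # → walking_to_plaza
print(trace_state(5.0))   # → paused_at_plaza
print(trace_state(14.0))  # → returning_home
```

Keeping the animation a pure function of elapsed time makes it trivial for the frontend to replay or scrub a trace.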

DISTRICTS

Communities are represented as districts, mapped from the latest snapshot.
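Mapping communities to districts could look roughly like this; the snapshot format (node id to community label) is an assumption, since the real snapshot schema is not documented here:

```python
from collections import defaultdict

# Hypothetical snapshot: node id -> community label. Illustrative only;
# loci's actual snapshot format may differ.
snapshot = {
    "paper-1": "retrieval",
    "paper-2": "retrieval",
    "notes-3": "planning",
    "code-4":  "retrieval",
}

def districts(snapshot: dict[str, str]) -> dict[str, list[str]]:
    """Group villagers (nodes) into districts by community label."""
    out: dict[str, list[str]] = defaultdict(list)
    for node, community in snapshot.items():
        out[community].append(node)
    return dict(out)

print(districts(snapshot))
```

Because districts are derived from the latest snapshot rather than computed live, the town's layout stays stable between pipeline runs.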

PEDESTALS

Right-click a villager to "Pin". A pinned node becomes a touchstone and rises onto a pedestal.
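Under the hood, pinning might be no more than a flag on the node, with pedestals rendered from the pinned set. The "Pin" action exists in the extension; this data shape is an assumption:

```python
# Illustrative only: how pinning might mark a node as a touchstone.
def toggle_pin(node: dict) -> dict:
    """Return a copy of the node with its pinned flag flipped."""
    node = dict(node)
    node["pinned"] = not node.get("pinned", False)
    return node

def pedestals(nodes: list[dict]) -> list[str]:
    """Pinned nodes rise onto pedestals; return their ids."""
    return [n["id"] for n in nodes if n.get("pinned")]

nodes = [{"id": "notes-3"}, toggle_pin({"id": "paper-1"})]
print(pedestals(nodes))  # → ['paper-1']
```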

INITIALIZE YOUR INSTANCE

# 1. Install loci (requires Python 3.12+)

git clone https://github.com/<you>/loci.git
cd loci
uv sync    # creates .venv with runtime deps

# 2. Start the server

uv run loci server
→ worker thread started
→ Uvicorn running on http://127.0.0.1:7077
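The "worker thread started" line suggests the server runs its pipeline in a background thread alongside the HTTP process. A minimal sketch of that pattern — a generic producer/worker setup, not loci's actual code:

```python
import queue
import threading

# Generic background-worker pattern: the server thread enqueues tasks,
# a daemon thread drains them. Not loci's actual implementation.
tasks: queue.Queue = queue.Queue()
done: list[str] = []

def worker() -> None:
    while True:
        task = tasks.get()
        if task is None:   # sentinel: shut down cleanly
            break
        done.append(f"reflected on {task}")
        tasks.task_done()

t = threading.Thread(target=worker, daemon=True)
t.start()
print("→ worker thread started")

tasks.put("draft-1")
tasks.put(None)
t.join()
print(done)  # → ['reflected on draft-1']
```

A sentinel value rather than a kill flag lets the worker finish whatever is already queued before exiting.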