Metadata-Version: 2.4
Name: velocitybrain
Version: 0.1.0
Summary: Velocity Brain: agent-first memory and execution engine
Requires-Python: >=3.11
Description-Content-Type: text/markdown
Requires-Dist: fastapi==0.116.1
Requires-Dist: uvicorn[standard]==0.35.0
Requires-Dist: psycopg[binary]==3.2.9
Requires-Dist: pydantic==2.11.7
Requires-Dist: python-dotenv==1.1.1
Requires-Dist: apscheduler==3.11.0
Requires-Dist: pgvector==0.3.6

# Velocity Brain

<p align="center">
  <img src="docs/assets/velocity-brain-logo.svg" alt="Velocity Brain logo" width="760" />
</p>

<p align="center">
  CLI-native. API-capable. MCP-ready.
</p>

## What Is Velocity Brain

Velocity Brain is a local-first brain system for agents. It stores memory in Postgres, retrieves internal context, and runs deterministic agent workflows.

Core capabilities:
- Brain-first retrieval before action
- Persistent memory and timeline model
- Agent loop runtime for planning and execution
- MCP tools for multiple MCP-compatible clients

## Quick Start (Local CLI)

### 1) Install

From PyPI (once published):

```powershell
python -m venv .venv
.\.venv\Scripts\Activate.ps1
python -m pip install --upgrade pip
python -m pip install velocitybrain
```

From local repo (dev mode):

```powershell
python -m venv .venv
.\.venv\Scripts\Activate.ps1
python -m pip install --upgrade pip
python -m pip install -r requirements.txt
python -m pip install -e .
```

### 2) Configure env

```powershell
Copy-Item .env.example .env
```

The default local database URL in `.env.example` is:

```env
DATABASE_URL=postgresql://velocity:velocity@localhost:5432/velocitybrain
```
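If a connection error is unclear, you can sanity-check the URL's components with the standard library before pointing tools at it. This sketch uses the example URL above; substitute your own value from `.env`:

```python
from urllib.parse import urlsplit

# Example URL from .env.example; replace with your own DATABASE_URL.
url = "postgresql://velocity:velocity@localhost:5432/velocitybrain"
parts = urlsplit(url)

print(parts.scheme)            # postgresql
print(parts.username)          # velocity
print(parts.hostname)          # localhost
print(parts.port)              # 5432
print(parts.path.lstrip("/"))  # database name: velocitybrain
```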

### 3) Start and initialize DB

```powershell
docker compose up db -d
docker compose exec -T db psql -U velocity -d velocitybrain -f /docker-entrypoint-initdb.d/01-schema.sql
```

### 4) Validate

```powershell
velocitybrain init
velocitybrain doctor
```

### 5) Use core flows

```powershell
velocitybrain ingest --source note --content "Met Jane Doe from Acme and discussed GTM"
velocitybrain query "What do I know about Jane Doe?"
velocitybrain run "Prepare me for meeting with Jane Doe tomorrow"
```

## How Answers Work Today

`velocitybrain query` and `velocitybrain run` do not call Claude/OpenAI/Gemini APIs by default.

They currently work as follows:
- `query`: keyword + hybrid retrieval from your local `entities` table, then lightweight synthesis from the top internal match.
- `run`: intent detection, deterministic planning, simulated execution actions, and local memory write-back.

If you connect Velocity Brain via MCP to Claude Code (or another MCP-compatible client), that client can use Velocity Brain tools. Velocity Brain itself, however, is local-first and does not require an LLM API to produce baseline outputs.
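The keyword side of that retrieval can be pictured with a toy scorer. This is an illustrative sketch only, not the actual ranker, and the sample rows stand in for `entities` content:

```python
def _terms(s: str) -> set[str]:
    """Lowercase, punctuation-stripped word set."""
    return {w.strip(".,?!").lower() for w in s.split()}

def keyword_score(query: str, text: str) -> int:
    """Naive term-overlap count; the real pipeline combines this idea
    with hybrid retrieval and synthesis."""
    return len(_terms(query) & _terms(text))

rows = [
    "Jane Doe works at Acme and leads GTM",
    "Quarterly metrics dashboard updated",
]
best = max(rows, key=lambda r: keyword_score("What do I know about Jane Doe?", r))
```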

## CLI Commands

```powershell
velocitybrain about
velocitybrain init
velocitybrain doctor
velocitybrain skills --category query --limit 5
velocitybrain ingest --source note --content "..."
velocitybrain query "..."
velocitybrain run "..."
velocitybrain serve api --host 0.0.0.0 --port 8080 --reload
velocitybrain serve mcp
```

Output controls:

```powershell
velocitybrain --json query "What changed this week?"
velocitybrain --color about
velocitybrain --no-color about
```

Behavior notes:
- `--json` prints machine-readable JSON output.
- `--color` forces ANSI color output.
- `--no-color` disables ANSI styling.
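For scripting, the `--json` flag composes naturally with `subprocess`. A minimal sketch, assuming `velocitybrain` is on PATH; the shape of the returned JSON is whatever the CLI emits:

```python
import json
import subprocess

def query_argv(text: str) -> list[str]:
    """Build the argv for a machine-readable query."""
    return ["velocitybrain", "--json", "query", text]

def run_query(text: str) -> dict:
    """Invoke the CLI and parse its JSON stdout (requires the CLI installed)."""
    proc = subprocess.run(query_argv(text), capture_output=True, text=True, check=True)
    return json.loads(proc.stdout)
```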

## MCP Setup (Multi-Client)

Start MCP server:

```powershell
velocitybrain serve mcp
```

Generic stdio config (works in MCP clients that accept JSON `mcpServers`):

```json
{
  "mcpServers": {
    "velocitybrain": {
      "command": "velocitybrain",
      "args": ["serve", "mcp"]
    }
  }
}
```

If the client cannot resolve the command on PATH, use the absolute executable path:

```json
{
  "mcpServers": {
    "velocitybrain": {
      "command": "C:/Path/To/Python/Scripts/velocitybrain.exe",
      "args": ["serve", "mcp"]
    }
  }
}
```

Client notes:
- Claude Code CLI: supports MCP server add/list/get/remove commands. Example:

```powershell
claude mcp add velocitybrain -- velocitybrain serve mcp
```

- OpenAI Codex CLI: supports MCP server registration. Example:

```powershell
codex mcp add velocitybrain -- velocitybrain serve mcp
```

- Gemini CLI: supports `mcpServers` in Gemini settings JSON.
- Cline: supports MCP setup through its MCP settings/marketplace UI.

Available Velocity Brain MCP tools:
- `ingest_text`
- `query`
- `run_agent`
- `list_skills`
- `healthz`

## API Usage

Start API:

```powershell
velocitybrain serve api --host 0.0.0.0 --port 8080 --reload
```

Endpoints:
- Health: `http://localhost:8080/v1/healthz`
- Docs: `http://localhost:8080/docs`
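A quick programmatic health check against the endpoints above, using only the standard library. The base URL assumes the `--port 8080` invocation shown; adjust if you serve elsewhere:

```python
import json
import urllib.request

def health_url(base_url: str = "http://localhost:8080") -> str:
    """Join the base URL with the health endpoint path."""
    return f"{base_url.rstrip('/')}/v1/healthz"

def check_health(base_url: str = "http://localhost:8080") -> dict:
    """Fetch /v1/healthz and return the parsed JSON body (server must be running)."""
    with urllib.request.urlopen(health_url(base_url), timeout=5) as resp:
        return json.loads(resp.read().decode("utf-8"))
```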

## Publish to PyPI

### 1) Prepare release metadata

```powershell
python -m pip install --upgrade build twine
```

- Bump `version` in `pyproject.toml` for every release.
- Keep package name as `velocitybrain` in `pyproject.toml`.

### 2) Build clean artifacts

```powershell
Remove-Item -Recurse -Force dist,build,*.egg-info -ErrorAction SilentlyContinue
python -m build
```

### 3) Validate artifacts

```powershell
python -m twine check dist/*
```

### 4) Test on TestPyPI first (recommended)

```powershell
python -m twine upload --repository testpypi dist/*
python -m pip install --index-url https://test.pypi.org/simple/ --extra-index-url https://pypi.org/simple velocitybrain==0.1.0
velocitybrain about
```

### 5) Upload to PyPI

Use API token auth. Set the credentials for the current session (`setx` only affects future terminals, so `twine` run in the same session would not see them):

```powershell
$env:TWINE_USERNAME = "__token__"
$env:TWINE_PASSWORD = "pypi-<your-token>"
python -m twine upload dist/*
```

Then verify the release:

```powershell
python -m pip install --upgrade velocitybrain
velocitybrain about
```

Notes:
- If the `velocitybrain` name is already taken on PyPI, publish under a new project name (for example `velocitybrain-ai`); you can keep the same console script name if desired.
- You can switch to Trusted Publishing (GitHub Actions + PyPI trusted publisher) later to avoid long-lived API tokens.

## Testing

```powershell
python -m pytest -q
```

## Backward Compatibility

Legacy commands still work:
- `velocityx ...`
- `python velocityx.py ...`

## Documentation

- `docs/ARCHITECTURE.md`
- `docs/FOLDER_STRUCTURE.md`
- `docs/DB_SCHEMA.md`
- `docs/API_DESIGN.md`
- `docs/SKILL_SYSTEM.md`
- `docs/AGENT_LOOP.md`
- `docs/WORKFLOWS.md`

## Reference Links

- Claude Code MCP docs: https://docs.claude.com/en/docs/claude-code/mcp
- Gemini CLI MCP docs: https://google-gemini.github.io/gemini-cli/docs/tools/mcp-server.html
- OpenAI Codex MCP docs: https://developers.openai.com/codex/mcp
- Cline MCP docs: https://docs.cline.bot/mcp/mcp-overview

## License

MIT

## Runtime Identity

Velocity Brain now supports a runtime identity specification layer via `identity.spec.json` (configurable with `IDENTITY_SPEC_PATH`) that is loaded independently of `AGENTS.md`.

CLI:

```powershell
velocitybrain identity
```

API:

- `GET /v1/identity/spec`

## Org-mode Support

You can ingest Org files directly:

```powershell
velocitybrain ingest --source notes --org-file ./notes/daily.org
```

API:

- `POST /v1/ingest/org`

## Sync (Dry-run + Multi-repo)

By default, `sync` runs in dry-run mode and does not mutate the registry or state; pass `--apply` to persist changes.

```powershell
velocitybrain sync --repo .
velocitybrain sync --repo C:/repo-a --repo C:/repo-b --apply
```
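The dry-run default can be pictured as a plan/apply split. This is a hypothetical sketch of the pattern, not the actual implementation:

```python
def sync_plan(repos: list[str], apply: bool = False) -> list[str]:
    """Compute the sync actions; only apply=True would persist them."""
    actions = [f"sync {repo}" for repo in repos]
    if not apply:
        return actions  # dry run: report only, mutate nothing
    # (a real implementation would perform each action here)
    return actions
```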

API:

- `POST /v1/sync/push`
- `POST /v1/sync/pull`
- `POST /v1/sync/full`

## Security and Policy

- Destructive MCP tools (`delete_page`, `put_page`, `sync_brain`) are policy-gated by default.
- Set `MCP_ALLOW_DESTRUCTIVE_TOOLS=true` and explicit `approve=true` in tool args to allow.
- File-based ingestion is workspace-bounded unless `ALLOW_UNSAFE_FILE_READS=true`.
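The double gate on destructive tools can be sketched as follows. This illustrates the stated policy, not the actual code:

```python
import os

def destructive_tool_allowed(tool_args: dict) -> bool:
    """Both gates must pass: the env flag must be set to "true" AND the
    tool call must carry an explicit approve=true in its arguments."""
    env_ok = os.environ.get("MCP_ALLOW_DESTRUCTIVE_TOOLS", "").lower() == "true"
    return env_ok and tool_args.get("approve") is True
```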

## Evaluation and Governance

- Retrieval evaluation endpoint: `POST /v1/eval/query`
- Access token minting: `POST /v1/access/token`
- Encrypted legacy plan storage: `POST /v1/legacy/plan`, `GET /v1/legacy/plan/{owner}`

