A typed, evidence-backed atlas compiled from your vault. my-vault · 412 evergreen · 88 deep dives · 14 topics.
The Library is the reader-facing entrypoint. It surfaces compiled Topics, recently absorbed Concepts, and any Open Questions the pipeline has flagged.
ovp-build-crystals · ovp-moc --scan
10-Knowledge/ · 20-Areas/ · knowledge.db
ovp --full, or live when AutoPilot runs
Sources, Candidates, Canonical State, Projection, Access Surface, Governance, and the rules that hold them apart.
Topic · ai-research · compiled 2026-04-04 02:14
A topic crystal compiled from 4 deep-dives and 11 atomic concepts. The two architectures trade agency against verifiability — they are not interchangeable.
A Topic is a compiled cluster of related concepts (internal name: community_crystal). It is rebuildable from the vault — never edit it by hand; edit the underlying evergreen pages instead.
ovp-build-crystals --pack research-tech · contradiction_crystal rows
RAG trades agency for verifiability: every answer is grounded in a retrieved span you can audit. Agent architectures invert the trade: they will act on partial information, replan, and reach conclusions no single retrieval would surface, but the audit trail is a transcript, not a citation.
The interesting systems are hybrids. Toolformer and ReAct embed retrieval as a tool the agent calls; Self-RAG lets the model decide when retrieval is needed at all. The unsolved question — see Open Questions below — is whether re-ranking helps or hurts.
RAG                                       AGENT
───                                       ─────
query ─→ retrieve ─→ rerank               query ─→ plan ─→ act ──┐
                       ↓                             ↑           │
                      LLM ─→ answer                  └─ observe ←┘
                               ↑                           │
                           citations                   transcript
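A minimal sketch of the hybrid pattern described above, in the Self-RAG spirit of letting the model decide when retrieval is needed: the agent loop treats retrieval as one tool among several, keeps a transcript either way, and only accumulates citations when it actually retrieves. The llm callable, the tool names, and the reply protocol here are illustrative assumptions, not part of the ovp pipeline or any specific library.

# Hybrid loop: retrieval is a tool the agent may call (hypothetical sketch).
from typing import Callable, Dict, List

def hybrid_answer(goal: str,
                  llm: Callable[[str], str],
                  tools: Dict[str, Callable[[str], str]],
                  max_steps: int = 6) -> dict:
    """Interleave planning and tool calls; keep citations only when retrieval ran."""
    transcript: List[str] = []   # agent-style audit trail (everything the model did)
    citations: List[str] = []    # RAG-style audit trail (only retrieved spans)
    context = goal
    for _ in range(max_steps):
        step = llm(f"Goal: {goal}\nSo far: {context}\n"
                   "Reply 'RETRIEVE: <query>', 'ACT <tool>: <arg>', or 'ANSWER: <text>'.")
        transcript.append(step)
        if step.startswith("ANSWER:"):
            return {"answer": step[7:].strip(),
                    "citations": citations, "transcript": transcript}
        if step.startswith("RETRIEVE:"):
            span = tools["retrieve"](step[9:].strip())
            citations.append(span)                    # grounded, auditable evidence
            context += "\nEvidence: " + span
        elif step.startswith("ACT"):
            name, _, arg = step[4:].partition(":")
            context += "\nObservation: " + tools[name.strip()](arg.strip())
    return {"answer": None, "citations": citations, "transcript": transcript}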
Evergreen · 10-Knowledge/Evergreen/AI Agent.md · absorbed 2026-04-03 14:02
One-line definition: An AI Agent is a goal-driven system that interleaves reasoning, tool use, and observation to act autonomously on tasks the operator did not pre-decompose.
An AI Agent is built around a loop: plan → act → observe → revise. Unlike a function-call pipeline, the agent's plan is generated at runtime from a goal and a tool catalog, and may be revised after every observation.
Agents trade verifiability for agency. They reach conclusions no single retrieval would surface — at the cost of an audit trail that is a transcript rather than a citation. See the topic RAG vs. Agent Architectures for the full trade-off.
Three execution patterns dominate: ReAct (interleave thought and action tokens), Toolformer (self-supervised training over tool tokens), and Plan-and-Execute (decouple a long-horizon plan from per-step execution).
┌──────────┐     ┌──────────┐     ┌──────────┐
│   Plan   │  →  │   Act    │  →  │ Observe  │
└──────────┘     └──────────┘     └──────────┘
     ↑                                  │
     └───────────── revise ────────────┘
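A minimal sketch of that loop in the Plan-and-Execute style named above: a long-horizon plan is drafted once at runtime from the goal and the tool catalog, then each step is executed and the remaining plan is revised after every observation. The draft_plan and revise_plan callables and the step shape are illustrative assumptions, not an API exposed by the vault pipeline.

# Plan → Act → Observe → Revise, as in the diagram above (illustrative sketch).
from typing import Callable, Dict, List

def run_agent(goal: str,
              draft_plan: Callable[[str, List[str]], List[dict]],
              revise_plan: Callable[[List[dict], str], List[dict]],
              tools: Dict[str, Callable[[str], str]],
              max_steps: int = 10) -> List[str]:
    """Execute a runtime-generated plan, revising it after each observation."""
    plan = draft_plan(goal, list(tools))            # PLAN: built from goal + tool catalog
    observations: List[str] = []
    for _ in range(max_steps):
        if not plan:
            break
        step = plan.pop(0)                          # next step, e.g. {"tool": ..., "arg": ...}
        result = tools[step["tool"]](step["arg"])   # ACT
        observations.append(result)                 # OBSERVE
        plan = revise_plan(plan, result)            # REVISE: remaining steps may change
    return observations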
Operator workbench for pipeline runs, projection health, and governance audits. Reader vocabulary does not apply here — internal storage names are exposed.
The Ops shell is for the operator (you) — not the reader. It exposes raw pipeline state, lets you replay runs, and surfaces governance violations. It is intentionally separate from the Library shell.
knowledge.db · 60-Logs/pipeline.jsonl · 60-Logs/transactions/
/ops/queue and /ops/runs
ovp-rebuild-registry, ovp-knowledge-index, ovp-lint --fix
ovp --full
knowledge.db stale vs. registry
articles · 3 deep dives drafted from 50-Inbox/01-Raw/ · +3 candidates
quality · 2 of 3 cleared 6-dim threshold · avg 3.7
fix_links · 1 deep dive deferred: broken link to [[Hybrid Retrieval]] · deferred
absorb · 5 evergreen candidates promoted · +5 evergreen
registry_sync · in progress… · running
The derived knowledge.db reports a row count for evergreen_pages that does not match the registry. Rebuild the projection, never edit it directly.
$ ovp-knowledge-index --rebuild --confirm
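A hedged illustration of the drift check behind that alert, assuming knowledge.db is a SQLite projection with an evergreen_pages table and that the registry count is supplied from elsewhere; the table name and schema are guesses, and the fix is always the rebuild command above, never a hand edit of the projection.

# Compare the derived projection's row count against the registry (schema assumed).
import sqlite3
import subprocess

def evergreen_rows(db_path: str = "knowledge.db") -> int:
    """Row count of the (assumed) evergreen_pages table in the projection."""
    with sqlite3.connect(db_path) as conn:
        return conn.execute("SELECT COUNT(*) FROM evergreen_pages").fetchone()[0]

def check_drift(registry_count: int) -> None:
    """If the projection is stale, rebuild it rather than patching knowledge.db."""
    projected = evergreen_rows()
    if projected != registry_count:
        print(f"stale projection: knowledge.db has {projected} rows, registry has {registry_count}")
        subprocess.run(["ovp-knowledge-index", "--rebuild", "--confirm"], check=True)
    else:
        print("projection matches registry")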