
Library

A typed, evidence-backed atlas compiled from your vault. my-vault · 412 evergreen · 88 deep dives · 14 topics.

What is this page?

The Library is the reader-facing entrypoint. It surfaces compiled Topics, recently absorbed Concepts, and any Open Questions the pipeline has flagged.

Compiled by: ovp-build-crystals · ovp-moc --scan
Reads from: 10-Knowledge/ · 20-Areas/ · knowledge.db
Refresh: daily on ovp --full, or live when AutoPilot runs
Vocabulary: Topic = community crystal · Open Question = contradiction crystal (BL-051)
Evergreen concepts: 412 (+8 this week)
Deep dives: 88 (avg quality 3.4 / 5)
Topics: 14 (last compiled 02:14 today)
Open questions: 3 (2 in AI Research, 1 in Tools)

Featured Topics

See all 14 topics

RAG vs. Agent Architectures

cluster: ai-research · evidence: 12

A topic crystal compiled from 4 deep dives and 11 atomic concepts. The two architectures are not interchangeable: RAG trades agency for verifiability; agents trade verifiability for agency.

2 open questions

Local-first Knowledge Tooling

cluster: tools · evidence: 9

Why Obsidian, SQLite, and Markdown outlast SaaS knowledge bases. Compiled from 3 deep dives.

Six-term Architecture Vocabulary

cluster: programming · evidence: 7

Sources, Candidates, Canonical State, Projection, Access Surface, Governance — and the rules that hold them apart.

Recently Absorbed

Topic · ai-research · compiled 2026-04-04 02:14

RAG vs. Agent Architectures

A topic crystal compiled from 4 deep dives and 11 atomic concepts. The two architectures trade agency against verifiability — they are not interchangeable.

cluster: ai-research · 12 evidence pages · 2 open questions · last refresh 02:14
What is a Topic?

A Topic is a compiled cluster of related concepts (internal name: community_crystal). It is rebuildable from the vault — never edit it by hand; edit the underlying evergreen pages instead.

Compiled by: ovp-build-crystals --pack research-tech
Sources: 4 deep dives · 11 evergreen concepts
Open questions: surfaced from contradiction_crystal rows
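
Since a Topic is never hand-edited, its storage can stay disposable. A minimal sketch of what a community_crystal row could look like in knowledge.db; the schema, column names, and values here are illustrative assumptions, not ovp's actual layout.

  # Illustrative only: schema and names are assumptions, not ovp's real layout.
  import sqlite3

  db = sqlite3.connect(":memory:")  # stand-in for knowledge.db
  db.execute("""
      CREATE TABLE community_crystal (
          slug        TEXT PRIMARY KEY,   -- e.g. 'rag-vs-agent-architectures'
          cluster     TEXT,               -- e.g. 'ai-research'
          summary     TEXT,               -- compiled, derived text
          evidence_n  INTEGER,            -- count of evidence pages
          compiled_at TEXT                -- rebuild timestamp
      )
  """)
  db.execute(
      "INSERT INTO community_crystal VALUES (?, ?, ?, ?, ?)",
      ("rag-vs-agent-architectures", "ai-research",
       "RAG trades agency for verifiability...", 12, "2026-04-04T02:14"),
  )
  # A rebuild drops and recompiles these rows from the vault; nothing here
  # is hand-edited, which is what keeps the projection disposable.
  print(db.execute("SELECT slug, evidence_n FROM community_crystal").fetchall())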

Compiled summary

RAG trades agency for verifiability: every answer is grounded in a retrieved span you can audit. Agent architectures invert the trade — they will act on partial information, replan, and reach conclusions no single retrieval would surface, but the audit trail is a transcript, not a citation.

The interesting systems are hybrids. Toolformer and ReAct embed retrieval as a tool the agent calls; Self-RAG lets the model decide when retrieval is needed at all. The unsolved question — see Open Questions below — is whether re-ranking helps or hurts.
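
A minimal sketch of that hybrid shape: retrieval exposed as a tool, with the model deciding per step whether to call it (Self-RAG style) and every retrieved span kept citable. The llm and retrieve stand-ins and the RETRIEVE:/FINAL: protocol are assumptions for illustration, not any framework's API.

  # Sketch only: llm(), retrieve(), and the step protocol are stand-ins.
  def retrieve(query: str) -> list[str]:
      """Tool the agent may call; returns citable spans."""
      return [f"span for {query!r}"]

  def llm(prompt: str) -> str:
      """Placeholder for a model call."""
      return "FINAL: answer grounded in retrieved spans"

  def hybrid_agent(goal: str, max_steps: int = 4) -> str:
      transcript = [f"GOAL: {goal}"]
      for _ in range(max_steps):
          step = llm("\n".join(transcript))
          if step.startswith("RETRIEVE:"):            # model opts into retrieval
              spans = retrieve(step.removeprefix("RETRIEVE:").strip())
              transcript.append(f"OBSERVED: {spans}")  # citable evidence
          elif step.startswith("FINAL:"):
              return step                              # grounded answer
          else:
              transcript.append(step)                  # plain reasoning step
      return "no answer within budget"

  print(hybrid_agent("compare rerankers"))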

Atomic concepts

  • RAG — retrieval-augmented generation
  • AI Agent — autonomous task-decomposing system
  • ReAct — reasoning + acting interleaved
  • Toolformer — self-supervised tool-use training
  • Self-RAG — model-controlled retrieval
  • Re-ranking — cross-encoder rescoring (see the sketch after this list)
  • Hybrid Retrieval — missing target, search fallback
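
Since re-ranking recurs in the open questions below, a toy sketch of cross-encoder rescoring: a cheap first stage produces candidates, and a joint query-passage scorer reorders the shortlist. The scoring function here is a token-overlap stand-in so the example runs anywhere; real cross-encoders run both texts through one transformer.

  # Sketch of cross-encoder rescoring; the scorer is a toy stand-in.
  def cross_encoder_score(query: str, passage: str) -> float:
      """Real systems encode query+passage jointly; we fake it with overlap."""
      q, p = set(query.lower().split()), set(passage.lower().split())
      return len(q & p) / max(len(q), 1)

  def rerank(query: str, candidates: list[str], k: int = 3) -> list[str]:
      # First-stage retrieval is cheap but coarse; the cross-encoder
      # rescores the shortlist, trading latency for precision.
      return sorted(candidates, key=lambda p: cross_encoder_score(query, p),
                    reverse=True)[:k]

  docs = ["rag grounds answers in retrieved spans",
          "agents replan after each observation",
          "reranking rescores a candidate shortlist"]
  print(rerank("how does rag ground answers", docs, k=2))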

Architecture sketch

  RAG                                AGENT
  ───                                ─────
  query ─→ retrieve ─→ rerank        query ─→ plan ─→ act ──┐
                          │                     ↑           │
                          ↓                     │           ↓
                         LLM ─→ answer          └─────── observe
                          │                                 │
                          ↓                                 ↓
                      citations                        transcript

Open questions

  1. Does re-ranking help? Three deep dives say yes (recall@10 improves 12–18%); two say latency cost dominates above 50ms p95. A recall@k sketch follows this list. Read the conflict ↗
  2. Is the agent's transcript an audit trail? The architecture doc says no — transcripts aren't citations. Two papers in the cluster argue otherwise. Read the conflict ↗
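
To make the recall@10 numbers in question 1 concrete, a minimal recall@k helper over hypothetical document IDs; the data is invented for illustration.

  def recall_at_k(ranked: list[str], relevant: set[str], k: int = 10) -> float:
      """Fraction of relevant items that appear in the top-k results."""
      return len(set(ranked[:k]) & relevant) / max(len(relevant), 1)

  # Hypothetical run: re-ranking pulls one more relevant span into the top 10.
  relevant = {"d3", "d7"}
  before = ["d1", "d3", "d9", "d4", "d5", "d6", "d8", "d2", "d0", "dX"]
  after  = ["d3", "d7", "d1", "d9", "d4", "d5", "d6", "d8", "d2", "d0"]
  print(recall_at_k(before, relevant), "→", recall_at_k(after, relevant))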

Evergreen · 10-Knowledge/Evergreen/AI Agent.md · absorbed 2026-04-03 14:02

AI Agent

One-line definition: An AI Agent is a goal-driven system that interleaves reasoning, tool use, and observation to act autonomously on tasks the operator did not pre-decompose.

📝 Detailed explanation

What is it?

An AI Agent is built around a loop: plan → act → observe → revise. Unlike a function-call pipeline, the agent's plan is generated at runtime from a goal and a tool catalog, and may be revised after every observation.
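
A minimal sketch of that loop, assuming a hypothetical two-tool catalog; the plan format and the revision rule are invented for illustration, not a specific framework's API.

  # Sketch of plan → act → observe → revise; tool catalog and plan
  # format are illustrative assumptions.
  TOOLS = {
      "search": lambda arg: f"results for {arg!r}",
      "read":   lambda arg: f"contents of {arg!r}",
  }

  def run_agent(goal: str) -> list[str]:
      plan = [("search", goal)]          # plan: generated at runtime from goal
      transcript = []
      while plan:
          tool, arg = plan.pop(0)        # act
          obs = TOOLS[tool](arg)         # observe
          transcript.append(f"{tool}({arg}) -> {obs}")
          if tool == "search":           # revise: extend plan from what we saw
              plan.append(("read", "top hit"))
      return transcript                  # the audit trail is a transcript

  print(run_agent("agent architectures"))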

Why does it matter?

Agents trade verifiability for agency. They reach conclusions no single retrieval would surface — at the cost of an audit trail that is a transcript rather than a citation. See the topic RAG vs. Agent Architectures for the full trade-off.

How does it work?

Three execution patterns dominate: ReAct (interleave thought and action tokens), Toolformer (self-supervised training over tool tokens), and Plan-and-Execute (decouple a long-horizon plan from per-step execution).

🏗️ Architecture

  ┌──────────┐     ┌──────────┐     ┌──────────┐
  │   Plan   │ ──→ │   Act    │ ──→ │  Observe │
  └──────────┘     └──────────┘     └──────────┘
       ↑                                 │
       └───────────── revise ────────────┘

💡 Action items

  • Short term — read Yao et al. 2022 (ReAct) and run the canonical HotpotQA replication.
  • Long term — evaluate LangGraph vs. hand-rolled state machines for production agents.

🔗 Related concepts

RAG · ReAct · Toolformer · Topic: RAG vs. Agent Architectures

Maintenance · Ops

Operator workbench for pipeline runs, projection health, and governance audits. Reader vocabulary does not apply here — internal storage names are exposed.

What is the Ops shell?

The Ops shell is for the operator (you) — not the reader. It exposes raw pipeline state, lets you replay runs, and surfaces governance violations. It is intentionally separate from the Library shell.

Reads from: knowledge.db · 60-Logs/pipeline.jsonl · 60-Logs/transactions/
Refresh: 5s meta-refresh on /ops/queue and /ops/runs
Boundary: read-only. Mutations happen via CLI: ovp-rebuild-registry, ovp-knowledge-index, ovp-lint --fix
Active run: ovp --full (running) · step 6 of 9 · absorb · ETA 02:18
Inbox queue: 7 (3 articles · 2 papers · 2 clippings)
Projection drift: 2 (knowledge.db stale vs. registry)
Governance: 0 (no violations in last 24h)
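
The run and timeline views below are projections of 60-Logs/pipeline.jsonl. A minimal reader sketch, assuming one JSON object per line with ts, step, and status fields; the field names are assumptions about the log shape.

  # Sketch: render recent pipeline events from a JSONL log.
  # Field names (ts, step, status) are assumptions, not ovp's documented shape.
  import json
  from pathlib import Path

  def recent_events(log_path: str, n: int = 10) -> list[dict]:
      lines = Path(log_path).read_text().splitlines()
      return [json.loads(line) for line in lines[-n:] if line.strip()]

  for ev in recent_events("60-Logs/pipeline.jsonl"):
      print(f"{ev.get('ts', '?'):>8}  {ev.get('step', '?'):<14} {ev.get('status', '')}")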

Recent runs

last 10 · all runs ↗
  id        cmd                      status    duration    started
  run-2426  ovp --full               running   —           02:11
  run-2425  ovp-build-crystals       ready     8.2s        02:09
  run-2424  ovp-knowledge-index      ready     12.6s       02:06
  run-2423  ovp --step articles      stale     2 retries   01:38
  run-2422  ovp-absorb --recent 7    ready     22.1s       01:14
  run-2421  ovp-lint --check         ready     1.4s        00:52

Pipeline timeline · last 30 minutes

auto-refresh 5s
  02:11:04  articles       3 deep dives drafted from 50-Inbox/01-Raw/                  +3 candidates
  02:13:22  quality        2 of 3 cleared 6-dim threshold                              avg 3.7
  02:14:01  fix_links      1 deep dive deferred — broken link to [[Hybrid Retrieval]]  deferred
  02:14:48  absorb         5 evergreen candidates promoted                             +5 evergreen
  02:15:30  registry_sync  in progress…                                                running
2 projection drift events

The derived knowledge.db reports a row count for evergreen_pages that does not match the registry. Rebuild the projection, never edit it directly.

$ ovp-knowledge-index --rebuild --confirm