jeevesagent.memory.vector

In-memory Memory with embedding-based semantic recall.

Episodes are stored in a dict keyed by id. remember() computes and attaches an embedding when the caller did not supply one; recall() embeds the query and ranks all stored episodes by cosine similarity, with optional time-range filtering.

This backend does not scale past a few thousand episodes: every recall() call scans all stored episodes, so each query is O(N). Beyond that, switch to PostgresMemory (HNSW index) or ChromaMemory.

Classes

VectorMemory

Pure-Python embedding-backed Memory.

Module Contents

class jeevesagent.memory.vector.VectorMemory(*, embedder: jeevesagent.core.protocols.Embedder | None = None, max_episodes: int | None = None, consolidator: jeevesagent.memory.consolidator.Consolidator | None = None, fact_store: jeevesagent.memory.facts.FactStore | None = None)[source]

Pure-Python embedding-backed Memory.

async append_block(name: str, content: str) → None[source]
async consolidate() → None[source]

Process unconsolidated episodes through the configured Consolidator, appending facts to self.facts.

No-op when no consolidator is configured.
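The consolidation pass can be sketched as follows. The Consolidator interface is not shown on this page, so this sketch assumes a single consolidate(episodes) method returning facts, and represents episodes as dicts with a "consolidated" flag; both are illustrative assumptions, not the library's actual types.

```python
def consolidate(episodes: list[dict], consolidator, facts: list) -> None:
    # No-op when no consolidator is configured (mirrors the documented behavior).
    if consolidator is None:
        return
    # Only episodes that have not been consolidated yet are processed.
    pending = [e for e in episodes if not e.get("consolidated")]
    if not pending:
        return
    # Append the derived facts, then mark the episodes as processed
    # so the next pass skips them.
    facts.extend(consolidator.consolidate(pending))
    for e in pending:
        e["consolidated"] = True
```

Marking episodes as consolidated makes the pass idempotent: running it twice appends each derived fact only once.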

async recall(query: str, *, kind: str = 'episodic', limit: int = 5, time_range: tuple[datetime.datetime, datetime.datetime] | None = None, user_id: str | None = None) → list[jeevesagent.core.types.Episode][source]

Embed the query and rank all stored episodes by cosine similarity, optionally filtered by time range and user.

async recall_facts(query: str, *, limit: int = 5, valid_at: datetime.datetime | None = None, user_id: str | None = None) → list[jeevesagent.core.types.Fact][source]
async remember(episode: jeevesagent.core.types.Episode) → str[source]

Store the episode, computing and attaching an embedding if the caller did not supply one. Returns the episode id.

async session_messages(session_id: str, *, user_id: str | None = None, limit: int = 20) → list[jeevesagent.core.types.Message][source]
snapshot() → dict[str, Any][source]
async update_block(name: str, content: str) → None[source]
async working() → list[jeevesagent.core.types.MemoryBlock][source]
property embedder: jeevesagent.core.protocols.Embedder
facts: jeevesagent.memory.facts.FactStore