jeevesagent.memory.postgres
===========================

.. py:module:: jeevesagent.memory.postgres

.. autoapi-nested-parse::

   Postgres + pgvector :class:`Memory` backend.

   Schema (created by :meth:`init_schema`):

   * ``memory_blocks(namespace, name, content, pinned_order, updated_at)``
   * ``episodes(id, namespace, session_id, occurred_at, input, output,
     embedding vector(N))`` with HNSW cosine index on ``embedding``

   The ``vector(N)`` column dimension is fixed at table-creation time and
   must match the configured embedder's ``dimensions``. Switching
   embedders later requires migrating the table.

   Both ``asyncpg`` and ``pgvector`` are imported lazily inside
   :meth:`connect` / :meth:`init_schema`, so the module imports cleanly
   in environments without those extras installed; the optional
   dependencies are only loaded when a connection is actually opened.
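
To make the dimension constraint concrete, here is a plain-Python sketch of
how the episode DDL depends on the embedder's ``dimensions``.
``DummyEmbedder`` and ``episodes_ddl`` are illustrative names, not part of
the module's API; the column list follows the schema described above.

```python
class DummyEmbedder:
    """Stand-in for a jeevesagent Embedder with a fixed output size."""
    dimensions = 1536  # illustrative; must match the vector(N) column

def episodes_ddl(dimensions: int) -> str:
    """Build the episodes DDL with the embedding dimension baked in."""
    return (
        "CREATE TABLE IF NOT EXISTS episodes ("
        "id TEXT PRIMARY KEY, namespace TEXT, session_id TEXT, "
        "occurred_at TIMESTAMPTZ, input TEXT, output TEXT, "
        f"embedding vector({dimensions}))"
    )

ddl = episodes_ddl(DummyEmbedder.dimensions)
# Switching to an embedder with a different output size would require
# migrating this table, since the size is fixed at creation time.
assert "vector(1536)" in ddl
```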



Attributes
----------

.. autoapisummary::

   jeevesagent.memory.postgres.DEFAULT_NAMESPACE


Classes
-------

.. autoapisummary::

   jeevesagent.memory.postgres.PostgresMemory


Module Contents
---------------

.. py:class:: PostgresMemory(pool: Any, *, embedder: jeevesagent.core.protocols.Embedder | None = None, namespace: str = DEFAULT_NAMESPACE, fact_store: Any | None = None)

   Postgres-backed :class:`Memory`.

   ``pool`` is an ``asyncpg.Pool`` (or anything with the same API).
   Tests can pass a fake pool whose ``acquire()`` returns a fake
   connection.


   .. py:method:: aclose() -> None
      :async:



   .. py:method:: append_block(name: str, content: str) -> None
      :async:



   .. py:method:: connect(dsn: str, *, embedder: jeevesagent.core.protocols.Embedder | None = None, namespace: str = DEFAULT_NAMESPACE, min_size: int = 1, max_size: int = 10, with_facts: bool = False) -> PostgresMemory
      :classmethod:
      :async:

      Open an ``asyncpg`` pool and register the pgvector codec.

      When ``with_facts=True``, a :class:`PostgresFactStore` rooted at
      the same pool is attached as ``self.facts``, so the agent loop's
      ``memory.facts`` integration works out of the box.



   .. py:method:: consolidate() -> None
      :async:



   .. py:method:: init_schema() -> None
      :async:


      Apply :meth:`schema_sql` against the connected pool.

      When a :class:`PostgresFactStore` is attached as ``self.facts``,
      its schema is initialised in the same call.



   .. py:method:: recall(query: str, *, kind: str = 'episodic', limit: int = 5, time_range: tuple[datetime.datetime, datetime.datetime] | None = None, user_id: str | None = None) -> list[jeevesagent.core.types.Episode]
      :async:



   .. py:method:: recall_facts(query: str, *, limit: int = 5, valid_at: datetime.datetime | None = None, user_id: str | None = None) -> list[jeevesagent.core.types.Fact]
      :async:



   .. py:method:: remember(episode: jeevesagent.core.types.Episode) -> str
      :async:



   .. py:method:: schema_sql() -> list[str]

      Return the DDL needed to bootstrap this backend's schema.

      Exposed so tests can assert on the SQL without running it; also
      usable from migration scripts that want to apply the schema in
      their own transaction.



   .. py:method:: session_messages(session_id: str, *, user_id: str | None = None, limit: int = 20) -> list[jeevesagent.core.types.Message]
      :async:



   .. py:method:: update_block(name: str, content: str) -> None
      :async:



   .. py:method:: working() -> list[jeevesagent.core.types.MemoryBlock]
      :async:



   .. py:property:: embedding_dimensions
      :type: int



   .. py:attribute:: facts
      :type:  Any | None
      :value: None



   .. py:property:: namespace
      :type: str


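As the class docstring notes, ``pool`` can be anything that quacks like
``asyncpg.Pool``. A minimal fake for tests might look like the sketch below
(the names are illustrative; a real asyncpg ``acquire()`` also supports being
awaited directly, which this fake omits, and ``apply_schema`` merely mirrors
what :meth:`init_schema` is described to do with :meth:`schema_sql` output):

```python
import asyncio
import contextlib

class FakeConnection:
    """Records executed SQL instead of talking to Postgres."""
    def __init__(self):
        self.executed = []

    async def execute(self, sql, *args):
        self.executed.append(sql)

class FakePool:
    """Duck-types the slice of the asyncpg.Pool API the backend needs."""
    def __init__(self):
        self.conn = FakeConnection()

    @contextlib.asynccontextmanager
    async def acquire(self):
        yield self.conn

async def apply_schema(pool, statements):
    # Run each DDL statement against an acquired connection, the way
    # init_schema() is described to apply schema_sql().
    async with pool.acquire() as conn:
        for sql in statements:
            await conn.execute(sql)

pool = FakePool()
asyncio.run(apply_schema(pool, ["CREATE TABLE demo (id TEXT)"]))
assert pool.conn.executed == ["CREATE TABLE demo (id TEXT)"]
```

Because the fake records every statement, a test can assert on the exact SQL
that would have been issued, without a running Postgres.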

.. py:data:: DEFAULT_NAMESPACE
   :value: 'default'
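
The ``time_range`` argument to :meth:`PostgresMemory.recall` bounds episodes
by ``occurred_at``. The intended filtering semantics can be illustrated in
plain Python (a sketch only: the real backend filters in SQL, and treating
both bounds as inclusive is an assumption here):

```python
from datetime import datetime, timedelta

# Illustrative episodes as (occurred_at, text) pairs.
now = datetime(2024, 6, 1, 12, 0)
episodes = [
    (now - timedelta(days=3), "older"),
    (now - timedelta(hours=2), "recent"),
    (now - timedelta(minutes=5), "newest"),
]

# A (start, end) tuple keeps only episodes inside the window.
start, end = now - timedelta(days=1), now
in_range = [text for ts, text in episodes if start <= ts <= end]
limited = in_range[:5]  # recall() caps results with its `limit` argument
assert limited == ["recent", "newest"]
```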


