Metadata-Version: 2.4
Name: langgraph-postgres-memory
Version: 0.1.0
Summary: Production-ready PostgreSQL memory for LangGraph agents. Pool setup, checkpointer lifecycle, and common ops in one class.
Author: LangModule
License: MIT
License-File: LICENSE
Keywords: agent,checkpointer,langgraph,memory,postgres
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Libraries
Requires-Python: >=3.11
Requires-Dist: langchain-core>=1.3.2
Requires-Dist: langgraph-checkpoint-postgres>=3.0.5
Requires-Dist: psycopg-pool>=3.3.0
Requires-Dist: psycopg[binary]>=3.3.3
Requires-Dist: tenacity>=9.1.4
Description-Content-Type: text/markdown

# LangGraph Postgres Memory

Production-ready PostgreSQL short-term memory for LangGraph agents. Pool setup, checkpointer lifecycle, and common operations in one class.

> **Note:** This is a wrapper around [langgraph-checkpoint-postgres](https://github.com/langchain-ai/langgraph/tree/main/libs/checkpoint-postgres) — it does not reimplement checkpointing. It handles the boilerplate you'd otherwise copy-paste into every agent.

## Features

- **One-line setup** — connection pool, checkpointer, and lifecycle managed via async context manager
- **Production pool defaults** — TCP keepalives, configurable idle/lifetime/timeout, schema isolation
- **Retry with backoff** — transient Postgres errors retried automatically via tenacity
- **Thread cleanup** — a single CTE deletes from all three checkpoint tables in one round-trip
- **Bulk cleanup** — delete threads older than N days using UUID v6 timestamp comparison
- **Health primitives** — `ping()` and `pool_stats()` for application health endpoints
- **Pydantic config** — validated settings; pass them however you load them (YAML, env vars, hardcoded)

## Installation

```bash
# pip
pip install langgraph-postgres-memory

# uv
uv add langgraph-postgres-memory
```

## Requirements

- Python >= 3.11
- PostgreSQL (tested with 16)

## Usage

```python
from langchain_core.messages import HumanMessage
from langgraph.graph import END, START, MessagesState, StateGraph
from langgraph_postgres_memory import PostgresMemoryConfig, PostgresShortTerm

# Define your graph
builder = StateGraph(MessagesState)
builder.add_node("echo", lambda state: {"messages": state["messages"]})
builder.add_edge(START, "echo")
builder.add_edge("echo", END)

# Configure memory
config = PostgresMemoryConfig(
    user="myuser",
    password="mypass",
    host="localhost",
    database="mydb",
    schema_name="agent_schema",  # default: "public"
)

async with PostgresShortTerm(config) as memory:
    # Compile your graph with the checkpointer
    graph = builder.compile(checkpointer=memory.checkpointer)

    # Invoke as usual
    result = await graph.ainvoke(
        {"messages": [HumanMessage(content="hello")]},
        {"configurable": {"thread_id": "thread-123"}},
    )

    # Read messages back
    messages = await memory.get_messages("thread-123")

    # Delete a thread
    await memory.delete_thread("thread-123")

    # Bulk cleanup
    await memory.delete_threads_older_than(days=30)

    # Health check
    alive = await memory.ping()
    stats = memory.pool_stats()
```
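`ping()` and `pool_stats()` are designed to slot into an application health endpoint. A framework-agnostic sketch, where the `health` handler and its payload shape are illustrative assumptions rather than part of the library:

```python
# Illustrative health-check handler; `memory` is an open PostgresShortTerm
# as in the example above.
async def health(memory) -> dict:
    alive = await memory.ping()
    return {
        "status": "ok" if alive else "degraded",
        "pool": memory.pool_stats(),  # size, available connections, waiting requests
    }
```

Mount this under whatever web framework you use; returning `degraded` rather than raising keeps the endpoint itself from failing when the database is down.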

### Without this library

```python
# ~40 lines you copy-paste into every agent
conn_str = f"postgresql://{user}:{quote_plus(password)}@{host}:{port}/{db}"
conn_str += "?keepalives=1&keepalives_idle=30&..."
pool = AsyncConnectionPool(
    conninfo=conn_str, min_size=2, max_size=10,
    kwargs={"autocommit": True, "row_factory": dict_row},
    configure=..., check=...,
)
await pool.open()
checkpointer = AsyncPostgresSaver(pool)
await checkpointer.setup()
# ... try/finally to close pool
# ... raw SQL to delete threads across 3 tables
# ... dig into checkpoint JSONB to extract messages
```

## Configuration

All pool and retry settings have sensible defaults. Override what you need:

```python
config = PostgresMemoryConfig(
    user="myuser",
    password="mypass",
    database="mydb",

    # Pool tuning (defaults shown)
    pool_min_size=2,
    pool_max_size=20,
    pool_max_idle=300,       # seconds — tune down to ~30 for serverless (Neon, Supabase)
    pool_max_lifetime=1800,  # seconds — tune down to ~180 for serverless
    pool_timeout=30,         # seconds — acquisition timeout

    # Retry tuning
    retry_max_attempts=3,
    retry_max_wait=10,       # backoff cap in seconds
)
```

## API Reference

| Method | Description |
|--------|-------------|
| `PostgresShortTerm(config)` | Constructor, takes `PostgresMemoryConfig` |
| `async with PostgresShortTerm(config)` | Opens pool, initializes checkpointer, closes on exit |
| `.checkpointer` | `AsyncPostgresSaver` instance for `builder.compile(checkpointer=...)` |
| `await .get_messages(thread_id)` | Get messages from latest checkpoint |
| `await .delete_thread(thread_id)` | Delete all checkpoints, blobs, and writes for a thread |
| `await .delete_threads_older_than(days)` | Bulk delete threads older than N days |
| `await .ping()` | Returns `True` if database is reachable |
| `.pool_stats()` | Pool size, available connections, waiting requests |
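The single-round-trip delete behind `delete_thread` looks roughly like this. Table names follow langgraph-checkpoint-postgres's default schema, but the library's actual query may differ:

```python
# Illustrative: one statement, one round-trip, all three checkpoint tables.
DELETE_THREAD_SQL = """
WITH deleted_writes AS (
    DELETE FROM checkpoint_writes WHERE thread_id = %(thread_id)s
),
deleted_blobs AS (
    DELETE FROM checkpoint_blobs WHERE thread_id = %(thread_id)s
)
DELETE FROM checkpoints WHERE thread_id = %(thread_id)s;
"""
```

Data-modifying CTEs let Postgres run all three deletes as one statement, so a thread is removed atomically without three separate round-trips.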

## Testing

```bash
# Unit tests only (no database needed)
make test-unit

# Full test suite (starts Postgres via Docker, runs all tests, stops Postgres)
make test-all

# Or manually
docker compose -f tests/docker-compose.yml up -d --wait
uv run pytest --run-integration -v
docker compose -f tests/docker-compose.yml down
```

## Acknowledgments

This project wraps [langgraph-checkpoint-postgres](https://github.com/langchain-ai/langgraph/tree/main/libs/checkpoint-postgres) from the LangChain team. The checkpointing engine, serialization, and schema management are entirely theirs — this library handles pool lifecycle, retry, and convenience operations on top.

## License

MIT
