Metadata-Version: 2.4
Name: membrain-langchain
Version: 0.1.0
Summary: LangChain integration for Membrain — persistent semantic memory for AI agents
Project-URL: Homepage, https://mem-brain.io
Project-URL: Documentation, https://mem-brain.io/docs
Author-email: Membrain <info@alphanimble.com>
License: MIT
License-File: LICENSE
Keywords: agents,ai,langchain,llm,membrain,memory,persistent-memory,rag,semantic-memory,vector-search
Classifier: Development Status :: 4 - Beta
Classifier: Framework :: AsyncIO
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Typing :: Typed
Requires-Python: >=3.10
Requires-Dist: httpx>=0.27.0
Requires-Dist: langchain-core>=0.3.0
Requires-Dist: pydantic>=2.0
Provides-Extra: anthropic
Requires-Dist: langchain-anthropic>=0.3.0; extra == 'anthropic'
Provides-Extra: fastapi
Requires-Dist: fastapi>=0.115.0; extra == 'fastapi'
Requires-Dist: uvicorn[standard]>=0.30.0; extra == 'fastapi'
Provides-Extra: openai
Requires-Dist: langchain-openai>=0.2.0; extra == 'openai'
Description-Content-Type: text/markdown

# membrain-langchain

> LangChain integration for [Membrain](https://mem-brain.io) — give your AI agents persistent, semantic long-term memory.

[![PyPI version](https://img.shields.io/pypi/v/membrain-langchain.svg)](https://pypi.org/project/membrain-langchain/)
[![Python](https://img.shields.io/pypi/pyversions/membrain-langchain.svg)](https://pypi.org/project/membrain-langchain/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

Membrain is a semantic memory system backed by PostgreSQL and pgvector. It stores memories as atomic notes, links related ones automatically via an LLM-powered Guardian, and supports graph-native retrieval — letting your agents recall facts, relationships, and context across sessions.

This package wires Membrain into LangChain through first-class primitives: a `BaseChatMessageHistory`, a `BaseRetriever`, agent tools, and a high-level `MembrainMemory` class.

---

## Install

```bash
pip install membrain-langchain

# with OpenAI support
pip install "membrain-langchain[openai]"

# with uv
uv add membrain-langchain
uv add "membrain-langchain[openai]"
```

---

## Quick Start

Get a Membrain API key at [mem-brain.io](https://mem-brain.io), then:

```python
import asyncio
from membrain_langchain import MembrainClient, MembrainMemory
from langchain_openai import ChatOpenAI
from langchain_core.messages import SystemMessage, HumanMessage

client = MembrainClient(api_key="mb_live_...")
memory = MembrainMemory(client=client, user_id="user-123")
llm = ChatOpenAI(model="gpt-4o")

async def chat(user_message: str) -> str:
    # Fetch relevant memories and inject into the system prompt
    system_prompt = await memory.build_system_prompt(
        query=user_message,
        base_prompt="You are a helpful personal assistant.",
    )
    response = await llm.ainvoke([
        SystemMessage(content=system_prompt),
        HumanMessage(content=user_message),
    ])
    # Store the interaction for future recall
    await memory.store_interaction(user_message, response.content)
    return response.content

asyncio.run(chat("I love mani dum biryani"))
# Next message: the assistant already knows you love biryani
```

---

## What is Membrain?

| Feature | Membrain | Typical vector store |
|---|---|---|
| Memory model | Atomic notes + semantic graph | Document/chunk store |
| Relationship search | Links are first-class searchable entities | No |
| Smart merge | Guardian auto-decides update vs create | Manual |
| Interpreted search | LLM summary of retrieved memories | No |
| Graph operations | Path-finding, hubs, neighborhood | No |

---

## Classes

### `MembrainClient`

The async HTTP client. All other classes use this internally.

```python
from membrain_langchain import MembrainClient

client = MembrainClient(
    api_key="mb_live_...",
    base_url="https://mem-brain-api-cutover-v4-production.up.railway.app",  # default
    poll_interval=0.5,   # seconds between ingest job polls
    poll_timeout=30.0,   # max seconds to wait when wait=True
)

# Store a memory (waits for Guardian to complete linking)
result = await client.add_memory(
    content="User prefers dark roast coffee",
    scope=["user:user-123"],
    category="preference",
)

# Fire-and-forget (returns immediately after job is submitted)
await client.add_memory(content="...", wait=False)

# Semantic search
results = await client.search(query="coffee preferences", k=5, scope=["user:user-123"])

# Graph operations
await client.graph_neighborhood(memory_id="abc-123", hops=2)
await client.graph_hubs(limit=10)
await client.graph_path(from_id="abc", to_id="xyz")

# Stats
await client.get_stats()
```

---

### `MembrainMemory`

A high-level drop-in memory manager. Handles context retrieval and interaction storage automatically.

```python
from membrain_langchain import MembrainMemory

memory = MembrainMemory(
    client=client,
    user_id="user-123",
    context_k=5,   # number of memories to inject per turn
)

# Get relevant memories as a formatted string
context = await memory.get_context("What coffee should I try?")

# Get a full system prompt with memory context appended
system_prompt = await memory.build_system_prompt(
    query="What coffee should I try?",
    base_prompt="You are a helpful assistant.",
)
# Returns: "You are a helpful assistant.\n\n## What you know about this user:\n- ..."

# Store a conversation turn (fire-and-forget)
await memory.store_interaction(user_msg="I love biryani", ai_msg="Noted!")

# Store an arbitrary fact (waits for confirmation)
await memory.add(content="User is vegetarian", category="dietary")
```

---

### `MembrainRetriever`

A proper `BaseRetriever` subclass. Plugs into any RAG chain.

```python
from membrain_langchain import MembrainRetriever
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough
from langchain_core.output_parsers import StrOutputParser

retriever = MembrainRetriever(
    client=client,
    user_id="user-123",
    k=5,
    scope=None,           # defaults to ["user:{user_id}"]
    category=None,        # optional category filter
    include_related=True, # include linked neighbor memories
)

# Use in a RAG chain (`llm` is the chat model from the Quick Start)
prompt = ChatPromptTemplate.from_template(
    "Answer using this context:\n{context}\n\nQuestion: {question}"
)

def format_docs(docs):
    return "\n\n".join(doc.page_content for doc in docs)

chain = (
    {"context": retriever | format_docs, "question": RunnablePassthrough()}
    | prompt
    | llm
    | StrOutputParser()
)
answer = await chain.ainvoke("What does the user like to eat?")
```

Each result comes back as a `Document` with metadata:
```python
Document(
    page_content="User loves mani dum biryani",
    metadata={
        "memory_id": "abc-123",
        "scope": ["user:user-123"],
        "semantic_score": 0.94,
        "related_memories": ["User is vegetarian"],
        "source": "membrain",
    }
)
```

> **Note:** `MembrainRetriever` is async-only. Use `chain.ainvoke()` or `await retriever.ainvoke(query)` (`aget_relevant_documents` also works but is deprecated in recent `langchain-core`).
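
Because the score travels in metadata, results can be post-filtered after retrieval. A small sketch, assuming `semantic_score` is a similarity in `[0, 1]` as shown above:

```python
docs = await retriever.ainvoke("food preferences")

# Keep only high-confidence matches; 0.8 is an arbitrary cutoff.
confident = [d for d in docs if d.metadata.get("semantic_score", 0.0) >= 0.8]
```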

---

### `MembrainChatMessageHistory`

A proper `BaseChatMessageHistory` subclass for use with `RunnableWithMessageHistory`. Persists conversation turns in Membrain scoped by user and session.

```python
from membrain_langchain import MembrainChatMessageHistory
from langchain_core.runnables.history import RunnableWithMessageHistory

def get_history(session_id: str) -> MembrainChatMessageHistory:
    return MembrainChatMessageHistory(
        client=client,
        user_id="user-123",
        session_id=session_id,
        max_history=20,
    )

chain_with_history = RunnableWithMessageHistory(
    chain,
    get_history,
    input_messages_key="input",
    history_messages_key="history",
)

await chain_with_history.ainvoke(
    {"input": "What did I say earlier?"},
    config={"configurable": {"session_id": "session-abc"}},
)
```

Scope format: `["user:{user_id}", "session:{session_id}"]`

- Search with `scope=["user:user-123"]` — all sessions for a user (long-term memory)
- Search with both scopes — isolated to one conversation
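
For example, using the `client.search` call shown earlier:

```python
# Long-term memory: recall across every session for this user
await client.search(query="food preferences", scope=["user:user-123"])

# Session-isolated: restrict recall to a single conversation
await client.search(
    query="food preferences",
    scope=["user:user-123", "session:session-abc"],
)
```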

> **Note:** Async-only. `RunnableWithMessageHistory` handles this automatically via `ainvoke`.

---

### `get_membrain_tools()`

Returns LangChain `StructuredTool` objects for use in ReAct / tool-calling agents.

```python
from membrain_langchain import get_membrain_tools
from langgraph.prebuilt import create_react_agent  # requires: pip install langgraph
from langchain_openai import ChatOpenAI

tools = get_membrain_tools(
    client=client,
    user_id="user-123",
    include_graph_tools=False,  # set True to add neighborhood + hubs tools
)

agent = create_react_agent(ChatOpenAI(model="gpt-4o"), tools)
result = await agent.ainvoke({
    "messages": [{"role": "user", "content": "What do you remember about me?"}]
})
```

**Available tools:**

| Tool | Description |
|---|---|
| `membrain_search` | Semantic memory search |
| `membrain_add` | Store a new memory |
| `membrain_stats` | Memory system statistics |
| `membrain_neighborhood` | Expand graph around a memory node *(opt-in)* |
| `membrain_hubs` | Most-connected memory nodes *(opt-in)* |
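
The tools are standard `StructuredTool` runnables, so they can also be invoked directly, outside an agent loop. A sketch, assuming `membrain_search` accepts a `query` argument (check each tool's `args_schema` for the actual signature):

```python
# Call a tool directly, bypassing the agent.
search_tool = next(t for t in tools if t.name == "membrain_search")
result = await search_tool.ainvoke({"query": "dietary preferences"})
```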

---

## Patterns

### Pattern A — `MembrainMemory` (automatic, always-on)

Retrieval and storage happen on every turn. The LLM receives memory as pre-injected context and never calls a tool.

```
User message → search Membrain → inject context → LLM → store interaction
```

Best for: simple chat apps, customer support bots, personal assistants.

### Pattern B — Agent tools (LLM-driven)

The agent decides when to search, what to search for, and when to store new facts.

```
User message → Agent reasons → calls membrain_search → reasons → responds → optionally calls membrain_add
```

Best for: ReAct agents, research assistants, autonomous agents.

### Pattern C — RAG pipeline

`MembrainRetriever` slots into a standard retrieval-augmented generation chain.

```
Question → retrieve relevant memories as Documents → LLM answers with context
```

Best for: knowledge-base Q&A, document assistants, domain-specific bots.

---

## Async-only

All Membrain API calls are async. This maps cleanly to modern LangChain (which uses `ainvoke`, `astream`, etc.) and FastAPI.
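
For example, a minimal sketch of serving memory context from a FastAPI endpoint. The route and response shape here are illustrative, not part of this package; install the extra with `pip install "membrain-langchain[fastapi]"`:

```python
from fastapi import FastAPI
from membrain_langchain import MembrainClient, MembrainMemory

app = FastAPI()
client = MembrainClient(api_key="mb_live_...")

@app.post("/context")
async def get_user_context(user_id: str, query: str) -> dict:
    # Each request gets a memory view scoped to the calling user.
    memory = MembrainMemory(client=client, user_id=user_id)
    return {"context": await memory.get_context(query)}
```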

Sync stubs (`messages` property, `add_message`, `clear`, `get_relevant_documents`) raise `NotImplementedError` with a clear message pointing to the async equivalent.
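
If you need to call Membrain from synchronous code, one option is to bridge with `asyncio.run`. This is safe only when no event loop is already running in the current thread:

```python
import asyncio

# Run one async call to completion from sync code.
context = asyncio.run(memory.get_context("What coffee should I try?"))
```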

---

## License

MIT

---

## Links

- [Membrain](https://mem-brain.io)
- [Membrain Docs](https://mem-brain.io/docs)
- [Get an API key](https://mem-brain.io)
- [LangChain Docs](https://python.langchain.com)
