Metadata-Version: 2.4
Name: langchain-memstate
Version: 0.1.0
Summary: LangChain integration for Memstate AI — persistent, structured, versioned memory for AI agents
Project-URL: Homepage, https://memstate.ai
Project-URL: Documentation, https://memstate.ai/docs
Project-URL: Repository, https://github.com/memstate-ai/memstate-mcp
Project-URL: Bug Tracker, https://github.com/memstate-ai/memstate-mcp/issues
Project-URL: Changelog, https://github.com/memstate-ai/memstate-mcp/releases
Author-email: Memstate AI <hello@memstate.ai>
License: MIT
Keywords: agent-memory,agents,ai,langchain,langmem,llm,mem0,memory,memstate,persistent-memory,rag
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.9
Requires-Dist: langchain-core>=0.3.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: requests>=2.28.0
Provides-Extra: dev
Requires-Dist: langchain-openai>=0.2.0; extra == 'dev'
Requires-Dist: langchain>=0.3.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.21; extra == 'dev'
Requires-Dist: pytest-mock>=3.10; extra == 'dev'
Requires-Dist: pytest>=7.0; extra == 'dev'
Requires-Dist: python-dotenv>=1.0; extra == 'dev'
Requires-Dist: responses>=0.23; extra == 'dev'
Provides-Extra: langchain
Requires-Dist: langchain-openai>=0.2.0; extra == 'langchain'
Requires-Dist: langchain>=0.3.0; extra == 'langchain'
Description-Content-Type: text/markdown

# memstate-langchain

> Persistent, structured, versioned memory for LangChain agents — powered by [Memstate AI](https://memstate.ai).

[![PyPI version](https://badge.fury.io/py/memstate-langchain.svg)](https://badge.fury.io/py/memstate-langchain)
[![Python 3.9+](https://img.shields.io/badge/python-3.9+-blue.svg)](https://www.python.org/downloads/)
[![LangChain](https://img.shields.io/badge/langchain-0.3%2B-green.svg)](https://python.langchain.com)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

Memstate AI gives your LangChain agents **persistent memory** that survives across sessions, with automatic conflict detection, version history, and semantic search. It is a direct alternative to [Mem0](https://mem0.ai) with superior structured storage and a transparent keypath hierarchy.

---

## Why Memstate vs Mem0?

| Feature | Memstate AI | Mem0 |
|---|---|---|
| **Keypath hierarchy** | Structured dot-paths (`auth.oauth.provider`) | Flat key-value |
| **Version history** | Full audit trail with time-travel | Limited |
| **Conflict detection** | Automatic cascade invalidation | Manual |
| **Server-side extraction** | LLM extracts keypaths from raw text | Manual structuring |
| **Semantic search** | Vector search with score | Vector search |
| **Open source** | MCP server open source | Partially open |
| **Self-hostable** | Yes | Yes |
| **LangChain support** | This package | `mem0ai` package |

---

## Installation

```bash
pip install memstate-langchain
```

With the full LangChain stack:

```bash
pip install "memstate-langchain[langchain]"
```

Get your free API key at [memstate.ai/dashboard](https://memstate.ai/dashboard).

---

## Quick Start

### Option 1: Semantic Memory for LCEL Chains

`MemstateMemory` wraps any LangChain Expression Language (LCEL) chain, automatically retrieving relevant memories before each call and persisting the conversation turn afterward:

```python
import os
from memstate_langchain import MemstateMemory
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate

llm = ChatOpenAI(model="gpt-4o-mini")
prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant.\n\n{relevant_memories}"),
    ("human", "{input}"),
])

memory = MemstateMemory(
    api_key=os.environ["MEMSTATE_API_KEY"],
    project_id="my-assistant",
    session_id="user-alice",   # optional: scope memories per user/session
)

# Wrap any LCEL chain
chain = memory.wrap(prompt | llm)

# Session 1 — the agent learns about the user
chain.invoke({"input": "My name is Alice. I prefer Python and I'm working on a FastAPI project."})
chain.invoke({"input": "I decided to use PostgreSQL for the database."})

# Session 2 (new process, same project_id) — Alice's preferences are retrieved automatically
response = chain.invoke({"input": "What database am I using?"})
print(response.content)  # "Based on your previous sessions, you decided to use PostgreSQL."
```
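Under the hood, the wrapper follows a simple retrieve-then-persist loop. The sketch below is framework-free and illustrative only: `search_memories` and `save_turn` are hypothetical stand-ins for the real Memstate API calls made by `MemstateMemory`.

```python
# Simplified sketch of the retrieve-then-persist loop MemstateMemory performs.
# search_memories and save_turn are stand-ins for the real API client.

def search_memories(query: str) -> list[str]:
    # Stand-in: the real client runs a semantic search against Memstate.
    return ["User prefers Python", "Database: PostgreSQL"]

saved_turns = []

def save_turn(human: str, ai: str) -> None:
    # Stand-in: the real client ingests the turn server-side.
    saved_turns.append((human, ai))

def wrap(chain, memory_key="relevant_memories", input_key="input"):
    def invoke(inputs: dict):
        # 1. Retrieve memories relevant to the incoming message
        memories = search_memories(inputs[input_key])
        inputs = {**inputs, memory_key: "\n".join(f"- {m}" for m in memories)}
        # 2. Run the wrapped chain with memories injected into the prompt input
        output = chain(inputs)
        # 3. Persist the turn so future sessions can recall it
        save_turn(inputs[input_key], output)
        return output
    return invoke

# Usage with a trivial "chain" that just echoes the injected memories
echo_chain = lambda inputs: f"Saw memories:\n{inputs['relevant_memories']}"
wrapped = wrap(echo_chain)
print(wrapped({"input": "What database am I using?"}))
```

The real wrapper does the same three steps with the configured `memory_key`, `input_key`, and `search_limit` (see the configuration reference below).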

### Option 2: Persistent Chat Message History

Use `MemstateChatMessageHistory` with `RunnableWithMessageHistory`, the standard LCEL pattern for per-session chat history:

```python
from memstate_langchain import MemstateChatMessageHistory
from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder
from langchain_core.runnables.history import RunnableWithMessageHistory

llm = ChatOpenAI(model="gpt-4o")

prompt = ChatPromptTemplate.from_messages([
    ("system", "You are a helpful assistant."),
    MessagesPlaceholder(variable_name="history"),
    ("human", "{input}"),
])

chain = prompt | llm

chain_with_history = RunnableWithMessageHistory(
    chain,
    lambda session_id: MemstateChatMessageHistory(
        api_key="mst_your_key_here",
        project_id="my-assistant",
        session_id=session_id,
    ),
    input_messages_key="input",
    history_messages_key="history",
)

# Each call with the same session_id resumes the conversation
response = chain_with_history.invoke(
    {"input": "What is the capital of France?"},
    config={"configurable": {"session_id": "user-alice"}},
)
```

### Option 3: Agent Tools (ReAct / Function-Calling)

Give your agent the full Memstate toolset — identical to the [Memstate MCP server](https://github.com/memstate-ai/memstate-mcp):

```python
from memstate_langchain import create_memstate_tools
from langchain_openai import ChatOpenAI
from langchain.agents import create_tool_calling_agent, AgentExecutor
from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

tools = create_memstate_tools(
    api_key="mst_your_key_here",
    project_id="my-app",
)

prompt = ChatPromptTemplate.from_messages([
    ("system", """You are a helpful assistant with persistent memory.

Before starting any task, use memstate_get to check what you already know.
After completing a task, use memstate_remember to save what you learned.
Use memstate_search when you need to find information by meaning.

Memory workflow:
1. CHECK: memstate_get(keypath="") → see full project knowledge
2. ACT: Complete the task using retrieved context
3. SAVE: memstate_remember(content="## Summary\\n- What was done") → persist learnings
"""),
    MessagesPlaceholder(variable_name="chat_history"),
    ("human", "{input}"),
    MessagesPlaceholder(variable_name="agent_scratchpad"),
])

llm = ChatOpenAI(model="gpt-4o")
agent = create_tool_calling_agent(llm, tools, prompt)
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)

# The agent will automatically use Memstate to remember and recall
executor.invoke({"input": "Set up authentication for our FastAPI project using OAuth2.", "chat_history": []})
executor.invoke({"input": "What authentication approach did we decide on?", "chat_history": []})
```

---

## Available Tools

When using `create_memstate_tools()`, the following tools are available to your agent:

| Tool | Purpose |
|---|---|
| `memstate_remember` | Save markdown/summaries → server extracts keypaths automatically (preferred for saving) |
| `memstate_set` | Set one keypath = one short value (e.g. `config.port = "8080"`) |
| `memstate_get` | Browse project keypaths, get memory content |
| `memstate_search` | Semantic search across memories |
| `memstate_history` | View version history of a keypath or memory |
| `memstate_delete` | Soft-delete a keypath (creates tombstone, history preserved) |

Select specific tools:

```python
tools = create_memstate_tools(
    api_key="mst_...",
    project_id="my-app",
    include=["memstate_remember", "memstate_get", "memstate_search"],
)
```

---

## Memory Architecture

Memstate uses a **hierarchical keypath system** to organize memories:

```
project.my-app
├── auth
│   ├── provider          → "OAuth2 with Google"
│   └── jwt.secret_key    → "stored in env var JWT_SECRET"
├── database
│   ├── type              → "PostgreSQL 15"
│   └── connection_pool   → "max_connections=20"
└── deployment
    ├── platform          → "Railway"
    └── domain            → "api.myapp.com"
```

Every memory write creates a new **version**, preserving full history. The server automatically detects conflicts and can cascade-invalidate related memories when a fact changes.
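The dot-path convention is plain data: each keypath is a dotted string, and the tree above is just the hierarchy those strings imply. A small illustrative sketch (not part of the package) of expanding flat keypath/value pairs into that tree:

```python
# Illustrative only: expand flat "a.b.c" -> value pairs into the nested tree
# shown above. Memstate stores keypaths as flat dotted strings; the hierarchy
# is implied by the dots.

def build_tree(pairs: dict) -> dict:
    tree = {}
    for keypath, value in pairs.items():
        node = tree
        *parents, leaf = keypath.split(".")
        for part in parents:
            node = node.setdefault(part, {})  # descend, creating levels as needed
        node[leaf] = value
    return tree

flat = {
    "auth.provider": "OAuth2 with Google",
    "auth.jwt.secret_key": "stored in env var JWT_SECRET",
    "database.type": "PostgreSQL 15",
}
tree = build_tree(flat)
print(tree["auth"]["jwt"]["secret_key"])  # stored in env var JWT_SECRET
```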

---

## Configuration Reference

### MemstateMemory / MemstateSemanticMemory

`MemstateMemory` is an alias for `MemstateSemanticMemory`.

```python
MemstateMemory(
    api_key="mst_...",              # Required: Memstate API key
    project_id="my-app",            # Required: project namespace
    session_id="default",           # Optional: scope per user/session
    base_url="https://api.memstate.ai",  # Optional: custom API URL
    timeout=60,                     # Optional: HTTP timeout (seconds)
    search_limit=10,                # Optional: memories retrieved per turn
    memory_key="relevant_memories", # Optional: key injected into chain input dict
    input_key="input",              # Optional: chain input key for human message
    output_key="content",           # Optional: chain output key for AI response
    human_prefix="Human",           # Optional: label for human turns
    ai_prefix="AI",                 # Optional: label for AI turns
    ingest_source="langchain",      # Optional: source label in dashboard
    wait_for_ingest=False,          # Optional: wait for async ingest to complete
)

# Usage
memory = MemstateMemory(api_key="mst_...", project_id="my-app")
chain_with_memory = memory.wrap(prompt | llm)      # wrap any LCEL chain
memories = memory.get_memories(query="auth")       # semantic search
all_memories = memory.get_memories()               # browse all
memory.clear()                                     # delete all project memories
```

### MemstateChatMessageHistory

```python
MemstateChatMessageHistory(
    api_key="mst_...",              # Required: Memstate API key
    project_id="my-assistant",      # Required: project namespace
    session_id="user-123",          # Required: unique session identifier
    base_url="https://api.memstate.ai",  # Optional
    timeout=60,                     # Optional
)
```

### create_memstate_tools

```python
create_memstate_tools(
    api_key="mst_...",              # Required
    project_id="my-app",            # Required
    base_url="https://api.memstate.ai",  # Optional
    timeout=60,                     # Optional
    include=None,                   # Optional: list of tool names to include
    # e.g. include=["memstate_remember", "memstate_search"]
)
```

---

## Low-Level Client

For direct API access:

```python
from memstate_langchain import MemstateClient

client = MemstateClient(api_key="mst_...")

# Store a structured memory at an explicit keypath
client.remember(
    content="PostgreSQL 15 on Railway",
    keypath="project.my-app.database.type",
    project_id="my-app",
    category="fact",
)

# Ingest rich content (server extracts keypaths automatically)
client.ingest(
    project_id="my-app",
    content="## Auth Decision\nWe chose OAuth2 with Google because...",
    source="agent",
)

# Semantic search
results = client.search(query="database configuration", project_id="my-app")
for r in results["results"]:
    print(r["keypath"], r["score"], r["summary"])

# Browse all memories in a project
memories = client.browse(
    keypath_prefix="project.my-app",
    project_id="my-app",
)

# Version history
history = client.history(keypath="project.my-app.auth.provider", project_id="my-app")
```

---

## Environment Variables

```bash
MEMSTATE_API_KEY=mst_your_key_here
MEMSTATE_BASE_URL=https://api.memstate.ai   # optional
```

Using environment variables:

```python
import os
from memstate_langchain import MemstateMemory

memory = MemstateMemory(
    api_key=os.environ["MEMSTATE_API_KEY"],
    project_id="my-app",
)
```

---

## Running the Tests

```bash
cd integrations/langchain
pip install -e ".[dev]"

# Unit tests (no API key required — uses mocked HTTP)
pytest tests/test_unit.py -v

# Integration tests (requires MEMSTATE_API_KEY)
MEMSTATE_API_KEY=mst_... pytest tests/test_integration.py -v

# Full workflow test
MEMSTATE_API_KEY=mst_... python tests/test_workflow.py
```

---

## Links

- **Website**: [memstate.ai](https://memstate.ai)
- **Documentation**: [memstate.ai/docs](https://memstate.ai/docs)
- **Dashboard** (get API key): [memstate.ai/dashboard](https://memstate.ai/dashboard)
- **MCP Server** (open source): [github.com/memstate-ai/memstate-mcp](https://github.com/memstate-ai/memstate-mcp)
- **Benchmark**: [memstate.ai/docs/benchmarks](https://memstate.ai/docs/benchmarks)
- **npm package**: [@memstate/mcp](https://www.npmjs.com/package/@memstate/mcp)

---

## License

MIT License — see [LICENSE](../../LICENSE) for details.
