Metadata-Version: 2.4
Name: hyperstack-langgraph
Version: 1.5.4
Summary: The Agent Provenance Graph for LangGraph agents. Timestamped facts, auditable decisions, deterministic trust. Prove what agents knew, trace why they knew it, coordinate without an LLM in the loop. $0 per operation.
Author-email: CascadeAI <deeq@cascadeai.dev>
License: MIT
Project-URL: Homepage, https://cascadeai.dev/hyperstack
Project-URL: Repository, https://github.com/deeqyaqub1-cmd/hyperstack-langgraph
Project-URL: Documentation, https://cascadeai.dev/hyperstack
Keywords: langgraph,memory,knowledge-graph,ai-agents,hyperstack,langchain,multi-agent,portable-memory
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Provides-Extra: langgraph
Requires-Dist: langgraph>=0.2; extra == "langgraph"
Requires-Dist: langchain-core>=0.3; extra == "langgraph"

# hyperstack-langgraph

**The Agent Provenance Graph for AI agents** — a memory layer where agents can prove what they knew, trace why they knew it, and coordinate without an LLM in the loop.

*Timestamped facts. Auditable decisions. Deterministic trust. $0 per operation at any scale.*

Knowledge graph memory for LangGraph agents: developer-controlled, zero LLM cost, time-travel debugging.

## Install

```bash
pip install hyperstack-langgraph langchain-core langgraph
```

Current version: **v1.5.4**

## Quick Start (3 lines)

```python
from hyperstack_langgraph import create_memory_agent
from langchain_openai import ChatOpenAI

agent = create_memory_agent(ChatOpenAI(model="gpt-4o"))
```

That's it. Your agent now has persistent knowledge graph memory. It will:
- **Search memory** at the start of every conversation
- **Store important facts** when decisions are made (with user confirmation)
- **Traverse the graph** to answer "what depends on X?" or "who decided Y?"

## Environment Variables

```bash
export HYPERSTACK_API_KEY=hs_your_key    # Get free at cascadeai.dev/hyperstack
export HYPERSTACK_WORKSPACE=default
export OPENAI_API_KEY=sk-...             # For your LLM
```

## Usage: Add Memory Tools to Existing Agent

```python
from hyperstack_langgraph import create_hyperstack_tools
from langgraph.prebuilt import create_react_agent
from langchain_openai import ChatOpenAI

# Create memory tools
memory_tools = create_hyperstack_tools()

# Add to your existing tools
my_tools = [my_calculator, my_web_search] + memory_tools

# Create agent with memory
agent = create_react_agent(ChatOpenAI(model="gpt-4o"), my_tools)

# Use it
result = agent.invoke(
    {"messages": [{"role": "user", "content": "What do we know about our auth setup?"}]},
    config={"configurable": {"thread_id": "session-1"}}
)
```

## Usage: Full Agent with Session Memory

```python
from hyperstack_langgraph import create_memory_agent
from langchain_openai import ChatOpenAI
from langgraph.checkpoint.memory import MemorySaver

agent = create_memory_agent(
    ChatOpenAI(model="gpt-4o"),
    checkpointer=MemorySaver(),  # Session memory (optional)
)

config = {"configurable": {"thread_id": "project-alpha"}}

# First message — agent searches HyperStack for context
agent.invoke({"messages": [{"role": "user", "content": "Let's work on the auth system"}]}, config)

# Agent remembers within session (MemorySaver) AND across sessions (HyperStack)
agent.invoke({"messages": [{"role": "user", "content": "We decided to use Clerk"}]}, config)
```

## Usage: Direct API Client

The direct API client class is `HyperStackClient` (not `HyperStackMemory`).

```python
from hyperstack_langgraph import HyperStackClient

client = HyperStackClient()

# Store
client.store("use-clerk", "Use Clerk for Auth", "Chose Clerk over Auth0",
             card_type="decision", keywords=["clerk", "auth"],
             links=[{"target": "alice", "relation": "decided"}])

# Search
client.search("authentication")

# Get
client.get("use-clerk")

# Graph traversal
client.graph("use-clerk", depth=2)

# Blockers / impact
client.blockers("deploy-prod")
client.impact("use-clerk")

# Time-travel (reconstruct at timestamp)
client.graph("use-clerk", depth=2, at="2026-02-01T00:00:00Z")

# Report success/failure (updates utility scores on edges)
client.feedback(card_slugs=["use-clerk", "auth-api"], outcome="success")

# Ingest conversation transcript
client.auto_remember("We decided to use Clerk. Alice owns auth.")

# Memory hub: working / semantic / episodic
cards = client.hs_memory(surface="semantic")

# Batch store
client.bulk_store([{"slug": "p1", "title": "Project A", "body": "..."}, ...])

# Agentic routing (deterministic, no LLM)
can_do = client.can("auth-api", action="deploy")
steps = client.plan("auth-api", goal="add 2FA")
```

## REST API

Always use `X-API-Key`, never `Authorization: Bearer`:

```bash
curl -X POST "https://hyperstack-cloud.vercel.app/api/cards" \
  -H "X-API-Key: hs_your_key" \
  -H "Content-Type: application/json" \
  -d '{"slug":"use-clerk","title":"Use Clerk","body":"Auth decision"}'
```
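As a sketch, the same request in plain Python with the standard library — the endpoint and header come from the curl call above, and `HyperStackClient` handles all of this for you:

```python
import json
import urllib.request

# Build (but don't send) the request, to show the required auth header.
payload = json.dumps(
    {"slug": "use-clerk", "title": "Use Clerk", "body": "Auth decision"}
).encode()

req = urllib.request.Request(
    "https://hyperstack-cloud.vercel.app/api/cards",
    data=payload,
    headers={"X-API-Key": "hs_your_key",  # never Authorization: Bearer
             "Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req) would send it and return the response.
```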

## Card Fields

| Field | Description |
|-------|-------------|
| `confidence` | 0.0–1.0 |
| `truthStratum` | `draft` \| `hypothesis` \| `confirmed` |
| `verifiedBy` | e.g. `"human:deeq"` |
| `verifiedAt` | Auto-set server-side |
| `memoryType` | `working` \| `semantic` \| `episodic` |
| `ttl` | Working memory expiry |
| `sourceAgent` | Auto-stamped after `identify()` |
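Assuming these fields travel in the same JSON body as the REST example above, a card payload might look like the sketch below. `verifiedAt` and `sourceAgent` are omitted because the table says the server sets them:

```python
import json

# Hypothetical card payload; field names and allowed values are from the table above.
card = {
    "slug": "use-clerk",
    "title": "Use Clerk for Auth",
    "body": "Chose Clerk over Auth0",
    "confidence": 0.9,            # 0.0-1.0
    "truthStratum": "confirmed",  # draft | hypothesis | confirmed
    "verifiedBy": "human:deeq",
    "memoryType": "semantic",     # working | semantic | episodic
}

print(json.dumps(card, indent=2))
```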

## Tools Provided

| Tool | Description |
|------|-------------|
| `hyperstack_search` | Search memory for relevant context |
| `hyperstack_store` | Save a fact, decision, preference, or person |
| `hyperstack_graph` | Traverse knowledge graph (impact, blockers, decision trails) |
| `hyperstack_list` | List all stored memories |
| `hyperstack_delete` | Remove outdated memories |

## Backend Features

- **Conflict detection** — structural, no LLM, auto-detects contradicting cards
- **Staleness cascade** — upstream changes mark dependents stale
- **Three memory surfaces** — working (TTL), semantic (permanent), episodic (30-day decay)
- **Decision replay** — reconstruct agent state at decision time + hindsight detection
- **Time-travel** — `graph()` with `at=` timestamp
- **Self-hosting** — Docker + `HYPERSTACK_BASE_URL` env var
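For self-hosting, point the client at your own deployment via the env var named above (the URL below is a placeholder for wherever your Docker instance runs):

```bash
export HYPERSTACK_BASE_URL=http://localhost:8080   # your self-hosted instance
export HYPERSTACK_API_KEY=hs_your_key
```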

## Why HyperStack?

- **Provenance tracking** — timestamped facts, auditable decisions
- **Decision replay** — reconstruct what agents knew at decision time
- **Deterministic trust** — no LLM in the loop for coordination
- **$0 per operation** — Mem0/Zep charge ~$0.002 per op; HyperStack: $0
- **30-second setup** — No Neo4j, no Docker, no OpenSearch. One API key, done.

## Pricing

| Plan | Cards | Price |
|------|-------|-------|
| Free | 50 | $0 — all features, including graph |
| Pro | 500+ | $29/mo |
| Team | 500, 5 API keys | $59/mo |
| Business | 2,000, 20 members | $149/mo |

Get a free API key at [cascadeai.dev/hyperstack](https://cascadeai.dev/hyperstack)

## License

MIT
