Metadata-Version: 2.4
Name: stryda-langgraph
Version: 0.1.0
Summary: Stryda governance as a LangGraph node — plug it between the agent LLM and the tool executor.
Project-URL: Homepage, https://stryda.ai
Project-URL: Documentation, https://docs.stryda.ai
Project-URL: Repository, https://github.com/Srujyama/Stryda
Author: Stryda
License: MIT
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.10
Requires-Dist: langchain-core<2,>=0.3
Requires-Dist: langgraph>=0.2
Requires-Dist: stryda-sdk>=0.1
Provides-Extra: test
Requires-Dist: httpx>=0.26; extra == 'test'
Requires-Dist: pytest-asyncio>=0.23; extra == 'test'
Requires-Dist: pytest>=7; extra == 'test'
Description-Content-Type: text/markdown

# stryda-langgraph

Governance as a LangGraph node. Plug it between your agent LLM and your tool executor, and every tool call in the graph is gated on a Stryda `check_action` decision before it runs.

## Install

Not on PyPI yet. Install from this monorepo:

```bash
pip install -e ./packages/stryda-sdk-python
pip install -e ./packages/stryda-langgraph
```

Requires Python 3.10+, `langchain-core >= 0.3`, `langgraph >= 0.2`.

## Quickstart

```python
import os

from langgraph.graph import StateGraph, END, MessagesState
from langgraph.prebuilt import ToolNode
from stryda_sdk import StrydaClient
from stryda_langgraph import stryda_policy_node

stryda = StrydaClient(api_key=os.environ["STRYDA_API_KEY"])

graph = StateGraph(MessagesState)
graph.add_node("agent", agent_llm_node)   # your existing LLM node
graph.add_node(
    "policy_check",
    stryda_policy_node(
        stryda_client=stryda,
        action_type_map={
            "refund":       "payments.refund",
            "send_email":   "comms.email_send",
        },
    ),
)
graph.add_node("tool_executor", ToolNode(tools))

graph.set_entry_point("agent")
graph.add_edge("agent", "policy_check")
graph.add_conditional_edges(
    "policy_check",
    lambda s: s["stryda_decision"],
    {
        "allow":    "tool_executor",
        "deny":     "agent",       # LLM sees the deny reason and re-plans
        "escalate": END,           # halt until escalation resolves
    },
)
graph.add_edge("tool_executor", "agent")

app = graph.compile()
```

## What the node reads + writes

**Reads** — `state["messages"]`. The last entry must be an `AIMessage` with `tool_calls` populated (the standard `MessagesState` pattern).

**Writes (state delta)** — three keys:

- `stryda_decision`: `"allow" | "deny" | "escalate"` — strictest wins. If any call is denied the aggregate is `deny`; if any is escalated the aggregate is `escalate`.
- `stryda_attestations`: `list[dict]` — one entry per tool call:
  ```python
  {
    "tool_call_id": "call_xyz",
    "tool":         "refund",
    "decision":     "allow",      # "allow" | "deny" | "escalate"
    "reason":       "matched scope refund.tier2",
    "check_id":     "chk_abc",    # ← use to verify the Ed25519 attestation
    "escalation_id": None,
    "approval_poll_url": None,
  }
  ```
- `messages` *(only on deny/escalate)* — a `ToolMessage` per blocked call, carrying the policy reason. Without this, a denied call leaves an open-loop tool call and most prompt templates loop. Feeding the reason back lets the LLM course-correct.
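As a rough sketch of that feedback step, the logic below builds one tool-result message per blocked call from the attestation list. Plain dicts stand in for `langchain_core`'s `ToolMessage`, and the message shape is illustrative, not the package's exact internals:

```python
# Sketch: turn blocked attestations into feedback messages for the LLM.
# Plain dicts stand in for ToolMessage; fields other than `tool_call_id`
# and `content` are illustrative.

def deny_feedback(attestations: list[dict]) -> list[dict]:
    """Build one tool-result message per denied/escalated call so the LLM
    sees the policy reason and can re-plan instead of looping."""
    out = []
    for att in attestations:
        if att["decision"] in ("deny", "escalate"):
            out.append({
                "type": "tool",
                "tool_call_id": att["tool_call_id"],
                "content": f"Blocked by policy ({att['decision']}): {att['reason']}",
            })
    return out
```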

## Parallel fanout

If the LLM emits multiple tool calls in one step, the node fires `check_action` for each in parallel (`asyncio.gather`). Decisions are aggregated strictest-first — one deny halts the whole batch. No partial execution.
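The strictest-wins rule can be sketched as a simple severity ordering (illustrative, not the package's internal code):

```python
# Strictest-wins aggregation: deny > escalate > allow.
SEVERITY = {"allow": 0, "escalate": 1, "deny": 2}

def aggregate(decisions: list[str]) -> str:
    """Return the strictest decision across a batch of parallel checks."""
    if not decisions:
        return "allow"  # no tool calls, nothing to gate
    return max(decisions, key=SEVERITY.__getitem__)
```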

## Fail-closed on Stryda outage

If Stryda itself is unreachable, `check_action` raises `StrydaError` and the graph halts. The node never silently allows a tool call through on a transport failure. (This is by design — see `MISSION.md` "do-not-do" list.)
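The invariant is easiest to see in miniature. In this sketch, `StrydaError` is a stub standing in for the SDK's exception class; the point is that a transport failure propagates rather than being swallowed into an implicit allow:

```python
# Fail-closed sketch: a transport failure surfaces as an error, never as
# an implicit "allow". StrydaError here is a hypothetical stub for the
# SDK's exception class.

class StrydaError(Exception):
    """Stub for the SDK's transport/API error."""

def gated_call(check, execute, *args):
    """Run `execute` only if `check` returns "allow"."""
    decision = check(*args)  # may raise StrydaError — do NOT catch-and-allow
    if decision != "allow":
        return {"blocked": decision}
    return execute(*args)
```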

## Conditional edges — pattern

The standard wiring:

```python
graph.add_conditional_edges(
    "policy_check",
    lambda s: s["stryda_decision"],
    {"allow": "tool_executor", "deny": "agent", "escalate": END},
)
```

Variants:

- **Retry after escalation approval** — keep a "waiting" node with a sleep / human-approval webhook instead of `END`, and route back into `policy_check` once the escalation resolves. `check_action` is idempotent when called with the same `idempotency_key`, so re-checking replays the approved decision.
- **Split deny vs. escalate UX** — route `"deny"` straight to END (unrecoverable) and only `"escalate"` back to `agent` (recoverable after approval).
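The retry-after-escalation variant boils down to a routing function like the one below. The `waiting` node name and the `escalation_resolved` state key are illustrative assumptions; plug the function into `add_conditional_edges` as in the Quickstart:

```python
# Sketch of routing for the "retry after escalation approval" variant.
# `waiting` and `escalation_resolved` are hypothetical names.

def route(state: dict) -> str:
    decision = state["stryda_decision"]
    if decision == "allow":
        return "tool_executor"
    if decision == "escalate":
        # Park in a waiting node; once the approval webhook flips the flag,
        # re-enter policy_check to replay the (idempotent) check.
        return "policy_check" if state.get("escalation_resolved") else "waiting"
    return "agent"  # deny → LLM sees the reason and re-plans
```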

## Related

- [`stryda-sdk-python`](../stryda-sdk-python/README.md) — underlying HTTP client
- [`stryda-langchain`](../stryda-langchain/README.md) — same story for plain LangChain agents (non-graph)
- Mission + architecture: `MISSION.md`, `docs/system-architecture.md`
