Metadata-Version: 2.4
Name: sologate-langchain
Version: 0.1.0
Summary: Human-in-the-loop approval gates for LangChain agents via Sologate
License: MIT
Project-URL: Homepage, https://www.sologate.app
Project-URL: Repository, https://github.com/sologate/sologate-langchain
Project-URL: Documentation, https://www.sologate.app/docs
Keywords: langchain,ai-agents,safety,human-in-the-loop,governance
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: requests>=2.28
Requires-Dist: websocket-client>=1.6
Provides-Extra: langchain
Requires-Dist: langchain-core>=0.1; extra == "langchain"

# sologate-langchain

Human-in-the-loop approval gates for LangChain agents.

When your agent is about to do something risky — delete files, send bulk email, make API calls — Sologate pauses it and routes the action to a human for approval. One approve/reject click. Full audit trail. Agent resumes automatically.

```bash
pip install sologate-langchain
```

---

## 3-line integration

```python
from langchain.agents import AgentExecutor
from sologate_langchain import SologateCallbackHandler

handler = SologateCallbackHandler(
    sologate_url="https://www.sologate.app",
    api_key="at_your_key_here",
)

# `agent` and `tools` are whatever you already built — see the full example below
agent_executor = AgentExecutor(agent=agent, tools=tools, callbacks=[handler])
```

That's it. Every tool call is scored automatically. Low-risk actions pass through silently. High-risk actions pause your agent and open an approval request in the [Sologate Decision Center](https://www.sologate.app/decisions).

---

## How it works

```
Agent calls tool
      ↓
SologateCallbackHandler.on_tool_start()
      ↓
Risk score 0–100 calculated locally (no API call for low-risk)
      ↓
score < threshold (60)?  →  runs silently ✓
score ≥ threshold?       →  POST /api/agent/request-approval
                              ↓
                         Agent PAUSES (WebSocket wait)
                              ↓
                    Human approves/rejects in dashboard
                              ↓
                    Approved → agent continues ✓
                    Rejected → GateRejectedError raised ✗
```
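
The score-then-gate decision above can be sketched in plain Python. The patterns, weights, and function names (`score_action`, `should_gate`, `HIGH_RISK_PATTERNS`) below are illustrative stand-ins, not the library's actual scoring rules:

```python
# Illustrative sketch of the local score-then-gate flow.
# Patterns and weights are made up for demonstration only.
HIGH_RISK_PATTERNS = {
    "rm -rf": 97,
    "DROP TABLE": 95,
    "sudo": 80,
}

def score_action(tool_input: str) -> int:
    """Return a 0-100 risk score from simple pattern matching."""
    return max(
        (score for pattern, score in HIGH_RISK_PATTERNS.items()
         if pattern in tool_input),
        default=10,  # unmatched actions default to LOW
    )

def should_gate(tool_input: str, threshold: int = 60) -> bool:
    """True when the action must pause for human approval."""
    return score_action(tool_input) >= threshold

print(should_gate("ls -la"))             # False — runs silently
print(should_gate("rm -rf ./backups/"))  # True  — approval request
```

Because the score is computed locally, a low-risk action never touches the network at all.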

---

## Full example

```python
import os
from langchain_openai import ChatOpenAI
from langchain.agents import AgentExecutor, create_react_agent
from langchain_community.tools import ShellTool, WriteFileTool
from langchain import hub
from sologate_langchain import SologateCallbackHandler

llm = ChatOpenAI(model="gpt-4o")
tools = [ShellTool(), WriteFileTool()]
prompt = hub.pull("hwchase17/react")

agent = create_react_agent(llm, tools, prompt)

handler = SologateCallbackHandler(
    sologate_url=os.environ["SOLOGATE_URL"],
    api_key=os.environ["SOLOGATE_KEY"],
    threshold=60,  # gate anything scored ≥ 60/100
)

agent_executor = AgentExecutor(agent=agent, tools=tools, callbacks=[handler])
result = agent_executor.invoke({"input": "Clean up the project folder"})
```

When the agent tries `rm -rf ./backups/` it scores **97/100 — HIGH** and your terminal shows:

```
[sologate] 🔴 Gating: terminal (score 97/100 — HIGH)
[sologate] Reason: Shell command contains rm -rf (irreversible bulk deletion)
[sologate] Flags:  • rm -rf detected
[sologate] Waiting for human decision at https://www.sologate.app/decisions...
```

Approve in the dashboard → agent continues. Reject → `GateRejectedError` raised.

---

## Environment variables

```bash
export SOLOGATE_URL=https://www.sologate.app
export SOLOGATE_KEY=at_your_key_here
```

Or pass directly to `SologateCallbackHandler(sologate_url=..., api_key=...)`.

---

## Risk scoring

Scores are calculated locally — no network round-trip for safe actions.

| Score | Tier   | Examples                                      |
|-------|--------|-----------------------------------------------|
| 90+   | HIGH   | `rm -rf`, `DROP TABLE`, `curl \| sh`          |
| 76–89 | HIGH   | `sudo`, `--force`, credential file writes     |
| 60–75 | HIGH   | Bulk email, HTTP DELETE, force push           |
| 30–59 | MEDIUM | Shell exec, file writes, outbound messages    |
| 0–29  | LOW    | Read operations, searches — auto-approved     |
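
The tier boundaries in the table reduce to a simple lookup; a minimal sketch (`risk_tier` is my own name, not part of the package):

```python
def risk_tier(score: int) -> str:
    """Map a 0-100 risk score to the tier names in the table above."""
    if score >= 60:
        return "HIGH"    # gated at the default threshold of 60
    if score >= 30:
        return "MEDIUM"
    return "LOW"         # auto-approved

print(risk_tier(97))  # HIGH
print(risk_tier(45))  # MEDIUM
print(risk_tier(12))  # LOW
```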

Customize the threshold:
```python
handler = SologateCallbackHandler(..., threshold=80)  # only gate HIGH risk
```

---

## Low-level gate function

For custom agent frameworks (not LangChain):

```python
from sologate_langchain import gate, GateRejectedError

try:
    gate(
        "delete_customer_records",
        sologate_url="https://www.sologate.app",
        api_key="at_...",
        context="Agent is about to delete 312 inactive customer records",
        payload={"count": 312, "table": "customers"},
    )
    # human approved — proceed
    db.execute("DELETE FROM customers WHERE status='inactive'")

except GateRejectedError:
    print("Rejected — no records deleted")
```
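
If you gate many actions in a custom framework, the check can be factored into a decorator. The sketch below is not part of the package — `gated` and `always_approve` are hypothetical; in real use you would pass `sologate_langchain.gate` (with its `sologate_url`/`api_key` kwargs) as `gate_fn` and let its `GateRejectedError` propagate:

```python
from functools import wraps

def gated(action_name, gate_fn, **gate_kwargs):
    """Run gate_fn(action_name, ...) before the wrapped function.
    If the gate raises (e.g. on rejection), the body never executes."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            gate_fn(action_name, **gate_kwargs)  # raises on rejection
            return fn(*args, **kwargs)
        return wrapper
    return decorator

# Demo with a stand-in gate that always approves; swap in
# sologate_langchain.gate for the real approval flow.
def always_approve(action_name, **kwargs):
    pass

@gated("delete_customer_records", always_approve)
def purge_inactive():
    return "purged"

print(purge_inactive())  # purged
```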

---

## Links

- [Sologate dashboard](https://www.sologate.app)
- [Get an API key](https://www.sologate.app/settings)
- [Full docs](https://www.sologate.app/docs)
- [Node.js / OpenClaw integration](https://www.npmjs.com/package/sologate-openclaw)
