Metadata-Version: 2.4
Name: fangcun-hook-sdk
Version: 0.1.0
Summary: Inline-hook client SDK for FangcunGuard. Lets agents call the FangcunGuard Hook server for input/output safety scanning without changing their LLM URL or API key.
Author: FangcunGuard
License: Apache-2.0
Project-URL: Homepage, https://github.com/fangcun-ai/fangcunguard-hook
Project-URL: Source, https://github.com/fangcun-ai/fangcunguard-hook
Keywords: llm,guardrails,ai-safety,agent,hook,fangcun
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Security
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: httpx>=0.25
Provides-Extra: openai
Requires-Dist: openai>=1.30; extra == "openai"
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.30; extra == "anthropic"
Provides-Extra: litellm
Requires-Dist: litellm>=1.30; extra == "litellm"
Provides-Extra: agents
Requires-Dist: openai-agents>=0.0.1; extra == "agents"
Provides-Extra: langchain
Requires-Dist: langchain-core>=0.2; extra == "langchain"
Provides-Extra: llamaindex
Requires-Dist: llama-index-core>=0.10; extra == "llamaindex"
Provides-Extra: autogen
Requires-Dist: autogen-core>=0.4; extra == "autogen"
Provides-Extra: crewai
Requires-Dist: crewai>=0.30; extra == "crewai"
Provides-Extra: dev
Requires-Dist: pytest>=7; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21; extra == "dev"
Requires-Dist: ruff>=0.1; extra == "dev"

# fangcun-hook-sdk

Inline-hook client SDK for [FangcunGuard](../../). Lets your agent call a
FangcunGuard Hook server for input/output safety scanning **without changing
the agent's LLM URL or API key**.

```
agent (your code, your LLM key, your LLM URL)
   │
   ├─ pre-LLM   ──► hook.scan_input(messages)   ──► allow / block / replace / anonymize
   ├─ call your LLM as usual
   ├─ post-LLM  ──► hook.scan_output(content)   ──► allow / block / replace / restore
   ▼
agent receives the final message
```

## Install

```bash
pip install fangcun-hook-sdk

# optional: bring the framework you actually use
pip install "fangcun-hook-sdk[openai]"   # raw OpenAI client wrapper
pip install "fangcun-hook-sdk[agents]"   # OpenAI Agents SDK guardrails
```

## Minimal example (raw OpenAI client)

```python
from openai import OpenAI
from fangcun_hook_sdk import HookClient
from fangcun_hook_sdk.adapters.openai_raw import wrap_openai

# Your existing LLM client — unchanged.
llm = OpenAI(api_key="sk-...", base_url="https://api.openai.com/v1")

# Wrap it once. The agent code below sees no difference.
hook = HookClient(base_url="http://localhost:5002", api_key="sk-xxai-...")
llm = wrap_openai(llm, hook)

resp = llm.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "My email is john@example.com"}],
)
print(resp.choices[0].message.content)
# PII is anonymized before reaching the LLM and restored on the way back.
```

## Minimal example (OpenAI Agents SDK)

```python
import asyncio

from agents import Agent, Runner
from fangcun_hook_sdk import AsyncHookClient
from fangcun_hook_sdk.adapters.openai_agents import (
    make_input_guardrail, make_output_guardrail,
)

hook = AsyncHookClient(base_url="http://localhost:5002", api_key="sk-xxai-...")

agent = Agent(
    name="support-bot",
    instructions="You are a helpful assistant.",
    input_guardrails=[make_input_guardrail(hook)],
    output_guardrails=[make_output_guardrail(hook)],
)

async def main():
    result = await Runner.run(agent, "What is the weather in Tokyo?")
    print(result.final_output)

asyncio.run(main())
```
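
When the hook returns a `BLOCK` verdict, the guardrail presumably trips its
tripwire, which the Agents SDK surfaces as an exception. A minimal sketch of
handling that, assuming the `BLOCK`-to-tripwire mapping holds (the exception
classes below are the Agents SDK's own):

```python
from agents.exceptions import (
    InputGuardrailTripwireTriggered,
    OutputGuardrailTripwireTriggered,
)

async def ask(text: str) -> str:
    try:
        result = await Runner.run(agent, text)
        return result.final_output
    except InputGuardrailTripwireTriggered:
        # The hook blocked the user input before it reached the LLM.
        return "Sorry, that request was blocked."
    except OutputGuardrailTripwireTriggered:
        # The hook blocked the model's reply on the way out.
        return "Sorry, that reply was blocked."
```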

## Low-level API

If your framework isn't covered, call the client directly:

```python
from fangcun_hook_sdk import HookClient, Action

hook = HookClient(base_url="http://localhost:5002", api_key="sk-xxai-...")

messages = [{"role": "user", "content": "..."}]

scan_in = hook.scan_input(messages=messages)
if scan_in.action == Action.BLOCK:
    raise RuntimeError(scan_in.message)
elif scan_in.action == Action.ANONYMIZE:
    messages_to_send = scan_in.anonymized_messages
    restore_mapping  = scan_in.restore_mapping
else:
    messages_to_send = messages
    restore_mapping  = None

# ... call your LLM with messages_to_send ...

scan_out = hook.scan_output(
    content=llm_reply,
    restore_mapping=restore_mapping,
)
if scan_out.action == Action.BLOCK:
    raise RuntimeError(scan_out.message)
# REPLACE (scrubbed text) and RESTORE (de-anonymized text) both carry new content.
final_text = (
    scan_out.content
    if scan_out.action in (Action.REPLACE, Action.RESTORE)
    else llm_reply
)
```
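
The same two-step flow works in async code with `AsyncHookClient`, which the
Agents SDK example above already uses. A minimal sketch, assuming its
`scan_input`/`scan_output` are awaitables with the same signatures as the sync
client:

```python
from fangcun_hook_sdk import Action, AsyncHookClient

hook = AsyncHookClient(base_url="http://localhost:5002", api_key="sk-xxai-...")

async def guarded_reply(messages, call_llm):
    # Pre-LLM scan.
    scan_in = await hook.scan_input(messages=messages)
    if scan_in.action == Action.BLOCK:
        raise RuntimeError(scan_in.message)
    anonymized = scan_in.action == Action.ANONYMIZE
    to_send = scan_in.anonymized_messages if anonymized else messages
    mapping = scan_in.restore_mapping if anonymized else None

    # Your async LLM call, unchanged.
    llm_reply = await call_llm(to_send)

    # Post-LLM scan; restores anonymized PII when a mapping is present.
    scan_out = await hook.scan_output(content=llm_reply, restore_mapping=mapping)
    if scan_out.action == Action.BLOCK:
        raise RuntimeError(scan_out.message)
    if scan_out.action in (Action.REPLACE, Action.RESTORE):
        return scan_out.content
    return llm_reply
```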

## Failure mode

By default the SDK is **fail-open**: if the hook server is unreachable or
returns an error, calls return `Action.PASS` and the agent keeps working.
Pass `fail_open=False` to surface those errors as exceptions instead.

```python
hook = HookClient(..., fail_open=False)
```
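
If you fail closed, a hook outage becomes an exception you handle yourself.
A sketch, assuming transport errors from `httpx` (the SDK's HTTP layer)
propagate when `fail_open=False`; the SDK may wrap them in its own exception
type:

```python
import httpx

from fangcun_hook_sdk import HookClient

hook = HookClient(
    base_url="http://localhost:5002",
    api_key="sk-xxai-...",
    fail_open=False,
)

try:
    scan_in = hook.scan_input(messages=[{"role": "user", "content": "hi"}])
except httpx.HTTPError as exc:
    # Fail closed: treat an unreachable hook as a refusal, not a pass.
    raise RuntimeError("safety hook unavailable; not calling the LLM") from exc
```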

## Talking to a legacy server

If you're pointing at an older FangcunGuard server that only exposes
`/v1/gateway/process-input` and `/v1/gateway/process-output`, set:

```python
hook = HookClient(..., primary_path="/v1/gateway")
```

The new server registers both prefixes, so most users don't need this.
