Metadata-Version: 2.4
Name: fencio-proxy-client
Version: 0.1.0
Summary: Python SDK for Fencio HTTP Proxy - Agent observability and tracing
Home-page: https://github.com/fencio/proxy
Author: Fencio
Author-email: support@fencio.io
Keywords: proxy http observability tracing agent monitoring
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Internet :: Proxy Servers
Requires-Python: >=3.7
Description-Content-Type: text/markdown
Provides-Extra: requests
Requires-Dist: requests>=2.25.0; extra == "requests"
Provides-Extra: httpx
Requires-Dist: httpx>=0.18.0; extra == "httpx"
Provides-Extra: all
Requires-Dist: requests>=2.25.0; extra == "all"
Requires-Dist: httpx>=0.18.0; extra == "all"
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: keywords
Dynamic: provides-extra
Dynamic: requires-python
Dynamic: summary

# Fencio Proxy Client SDK

Python SDK that routes agent HTTP traffic through the Fencio HTTP proxy for observability and tracing.

## Installation

```bash
pip install -e .
```

Or with optional dependencies:

```bash
# For requests library support
pip install -e ".[requests]"

# For httpx library support
pip install -e ".[httpx]"

# For all HTTP libraries
pip install -e ".[all]"
```

## Quick Start

```python
from fencio_proxy_client import FencioProxyClient
import requests

# Initialize the client
client = FencioProxyClient(
    agent_id="my_threat_agent",
    api_key="your_api_key_here",
    proxy_url="http://localhost:8080"
)

# Start routing requests through Fencio proxy
client.start()

# All HTTP requests are now traced
response = requests.get("http://httpbin.org/get")

# Stop when done
client.stop()
```

## Context Manager Support

```python
from fencio_proxy_client import FencioProxyClient
import requests

# Use as a context manager
with FencioProxyClient(agent_id="my_agent", api_key="secret") as client:
    # Requests inside this block are traced
    response = requests.get("http://httpbin.org/get")

# Automatically stops when exiting the block
```

## Layer Classification

You can classify requests by layer (e.g., LLM calls vs tool calls):

```python
from fencio_proxy_client import FencioProxyClient
import requests

client = FencioProxyClient(agent_id="my_agent", api_key="secret")
client.start()

# Mark LLM requests
with client.layer_context('llm', {'model': 'gpt-4', 'provider': 'openai'}):
    response = requests.post(
        'https://api.openai.com/v1/chat/completions',
        json={'model': 'gpt-4', 'messages': [{'role': 'user', 'content': 'Hello'}]}
    )

# Mark tool requests
with client.layer_context('tool', {'tool': 'web_search'}):
    response = requests.get('http://api.example.com/search?q=test')

client.stop()
```

## Session Management

```python
client = FencioProxyClient(
    agent_id="my_agent",
    api_key="secret",
    session_id="custom_session_123"  # Optional: specify your own session ID
)

# Or let the SDK generate a session ID automatically
client = FencioProxyClient(
    agent_id="my_agent",
    api_key="secret",
    auto_session=True  # Default: generates UUID session ID
)

# Generate a new session ID on the fly
new_session = client.new_session()
print(f"Started new session: {new_session}")
```

## Supported HTTP Libraries

The SDK automatically patches these HTTP libraries:

- **requests** - Most popular HTTP library
- **httpx** - Modern async-capable HTTP client
- **urllib3** - Low-level HTTP library

You can choose which libraries to patch:

```python
# Only patch requests
client = FencioProxyClient(
    agent_id="my_agent",
    api_key="secret",
    patch_libraries=['requests']
)

# Patch multiple libraries
client = FencioProxyClient(
    agent_id="my_agent",
    api_key="secret",
    patch_libraries=['requests', 'httpx']
)
```

## Integration with LangChain Agents

```python
from langchain.agents import initialize_agent, Tool
from langchain.llms import OpenAI
from fencio_proxy_client import FencioProxyClient
import requests

# Initialize Fencio client
fencio = FencioProxyClient(
    agent_id="langchain_threat_agent",
    api_key="your_api_key"
)
fencio.start()

# Define tools
tools = [
    Tool(
        name="ThreatIntel",
        func=lambda x: requests.get(f"http://threat-api.com/lookup?q={x}").json(),
        description="Look up threat intelligence data"
    )
]

# All LLM calls and tool calls are now traced through Fencio
llm = OpenAI(temperature=0)
agent = initialize_agent(tools, llm, agent="zero-shot-react-description")
result = agent.run("What is the threat level for IP 192.168.1.1?")

fencio.stop()
```

## Configuration

### Environment Variables

You can also configure the client using environment variables:

```bash
export FENCIO_AGENT_ID="my_agent"
export FENCIO_API_KEY="secret_key"
export FENCIO_PROXY_URL="http://localhost:8080"
```

```python
import os
from fencio_proxy_client import FencioProxyClient

client = FencioProxyClient(
    agent_id=os.getenv('FENCIO_AGENT_ID'),
    api_key=os.getenv('FENCIO_API_KEY'),
    proxy_url=os.getenv('FENCIO_PROXY_URL', 'http://localhost:8080')
)
```

## API Reference

### FencioProxyClient

#### `__init__(agent_id, api_key=None, proxy_url="http://localhost:8080", session_id=None, auto_session=True, patch_libraries=None)`

Initialize the Fencio proxy client.

**Parameters:**
- `agent_id` (str): Unique identifier for this agent
- `api_key` (str, optional): API key for authentication
- `proxy_url` (str): URL of the Fencio proxy server (default: "http://localhost:8080")
- `session_id` (str, optional): Session identifier (auto-generated if not provided)
- `auto_session` (bool): Auto-generate session ID if not provided (default: True)
- `patch_libraries` (list, optional): List of HTTP libraries to patch (default: all available)

#### `start()`

Start routing HTTP requests through Fencio proxy.

#### `stop()`

Stop routing HTTP requests through Fencio proxy.

#### `is_active() -> bool`

Check if the client is currently active.

#### `get_headers(layer=None, layer_metadata=None) -> dict`

Get Fencio headers to inject into HTTP requests.

**Parameters:**
- `layer` (str, optional): Layer classification (e.g., 'llm', 'tool')
- `layer_metadata` (dict, optional): Additional metadata for the layer

**Returns:**
- Dictionary of HTTP headers
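
The exact contents depend on the client's configuration. As an illustrative sketch (not the SDK's internal code), the header names documented under "How It Works" could be assembled like this:

```python
import json
import uuid

# Illustrative reconstruction of the header set get_headers() returns.
# The header names come from the "How It Works" section below; the
# assembly logic here is an assumption, not the SDK source.
def build_fencio_headers(agent_id, api_key=None, session_id=None,
                         layer=None, layer_metadata=None):
    headers = {
        "X-Fencio-Agent-ID": agent_id,
        "X-Fencio-Session-ID": session_id or str(uuid.uuid4()),
    }
    if api_key is not None:
        headers["X-Fencio-API-Key"] = api_key
    if layer is not None:
        headers["X-Fencio-Layer"] = layer
    if layer_metadata is not None:
        headers["X-Fencio-Layer-Metadata"] = json.dumps(layer_metadata)
    return headers

headers = build_fencio_headers(
    "my_agent", api_key="secret",
    layer="tool", layer_metadata={"tool": "web_search"},
)
```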

#### `new_session() -> str`

Generate a new session ID and update the client.

**Returns:**
- The new session ID

#### `layer_context(layer, metadata=None)`

Context manager for setting layer classification for a block of code.

**Parameters:**
- `layer` (str): Layer classification (e.g., 'llm', 'tool')
- `metadata` (dict, optional): Additional metadata for the layer

## How It Works

1. **Library Patching**: When you call `client.start()`, the SDK patches common HTTP libraries (requests, httpx, urllib3) to route all HTTP traffic through the Fencio proxy.

2. **Header Injection**: For each HTTP request, the SDK automatically injects these headers:
   - `X-Fencio-Agent-ID`: Your agent identifier
   - `X-Fencio-API-Key`: Your API key (if provided)
   - `X-Fencio-Session-ID`: Session identifier
   - `X-Fencio-Layer`: Layer classification (if using `layer_context`)
   - `X-Fencio-Layer-Metadata`: JSON-encoded layer metadata

3. **Proxy Routing**: All HTTP/HTTPS requests are automatically routed through the Fencio proxy at the specified `proxy_url`.

4. **Trace Storage**: The Fencio proxy captures complete request/response data and stores it with agent context for later analysis.
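
Conceptually, steps 1-3 amount to attaching the Fencio headers and proxy settings to every outgoing request. The snippet below is a hand-rolled equivalent using a `requests.Session`, shown for illustration only; it is not the SDK's actual patching code:

```python
import requests

PROXY_URL = "http://localhost:8080"

# Manually replicate what client.start() configures behind the scenes:
session = requests.Session()
session.headers.update({
    "X-Fencio-Agent-ID": "my_agent",
    "X-Fencio-API-Key": "secret",
    "X-Fencio-Session-ID": "custom_session_123",
})
session.proxies.update({"http": PROXY_URL, "https": PROXY_URL})

# Prepare (without sending) a request to inspect what the proxy would see:
prepared = session.prepare_request(
    requests.Request("GET", "http://httpbin.org/get")
)
```

Every request sent through `session` would now carry the Fencio headers and route via the proxy; the SDK applies the same configuration globally by patching the HTTP libraries, so your existing code needs no changes.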

## License

MIT License
