Metadata-Version: 2.4
Name: raindrop-openai-agents
Version: 0.0.3
Summary: Raindrop integration for OpenAI Agents SDK — automatic tracing of agent runs, LLM generations, tool calls, and handoffs.
Project-URL: Homepage, https://raindrop.ai
Project-URL: Documentation, https://docs.raindrop.ai
Project-URL: Repository, https://github.com/invisible-tools/dawn/tree/main/packages/openai-agents-python
Author-email: Raindrop AI <sdk@raindrop.ai>
License-Expression: MIT
License-File: LICENSE
Classifier: Typing :: Typed
Requires-Python: >=3.10
Requires-Dist: raindrop-ai>=0.0.42
Provides-Extra: agents
Requires-Dist: openai-agents>=0.1.0; extra == 'agents'
Provides-Extra: dev
Requires-Dist: pytest-asyncio>=0.23.0; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Requires-Dist: requests>=2.28; extra == 'dev'
Description-Content-Type: text/markdown

# raindrop-openai-agents

Raindrop integration for the [OpenAI Agents SDK](https://github.com/openai/openai-agents-python) (Python). Implements a `TracingProcessor` that automatically captures agent runs, LLM generations, tool calls, and handoffs, then ships them to [Raindrop](https://raindrop.ai).

## Installation

```bash
pip install raindrop-openai-agents openai-agents
# or, equivalently, via the packaged extra:
pip install "raindrop-openai-agents[agents]"
```

## Quick Start

```python
from raindrop_openai_agents import RaindropOpenAIAgents
from agents import Agent, Runner

raindrop = RaindropOpenAIAgents(
    api_key="your-write-key",
    user_id="user-123",
)
# Processor is auto-registered with the global trace provider

agent = Agent(name="Assistant", model="gpt-4o", instructions="Be helpful")
result = Runner.run_sync(agent, "Hello!")
print(result.final_output)

raindrop.flush()
```

### Factory Function (Legacy)

The `create_raindrop_openai_agents()` factory is still available and now returns a `RaindropOpenAIAgents` instance:

```python
from raindrop_openai_agents import create_raindrop_openai_agents

raindrop = create_raindrop_openai_agents(api_key="your-write-key", user_id="user-123")
raindrop.flush()
```

## What Gets Captured

- **Agent runs** — trace-level events with workflow name
- **LLM generations** — model, input messages, output text, token usage
- **Tool calls** — individual tool spans with name, input, output, duration, and error tracking via the Interaction API
- **Finish reason** — extracted from response status or generation output (`ai.finish_reason`)
- **Extended token categories** — cached tokens (`ai.usage.cached_tokens`) and reasoning/thoughts tokens (`ai.usage.thoughts_tokens`) for OpenAI o1/o3 models
- **Errors** — error type and message captured in event properties (never interferes with agent execution)

## API Reference

### `RaindropOpenAIAgents(api_key, user_id=None, convo_id=None, tracing_enabled=True, bypass_otel_for_tools=True, debug=False)`

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `api_key` | `Optional[str]` | `None` | Raindrop API key (omit to disable telemetry) |
| `user_id` | `Optional[str]` | `"unknown"` | Default user identifier for all events |
| `convo_id` | `Optional[str]` | `None` | Group events into a conversation |
| `tracing_enabled` | `bool` | `True` | Enable OTEL-based tracing |
| `bypass_otel_for_tools` | `bool` | `True` | Bypass OTEL instrumentation for tool calls |
| `debug` | `bool` | `False` | Enable verbose debug logging |

#### Properties

| Name | Type | Description |
|------|------|-------------|
| `processor` | `RaindropTracingProcessor` | The underlying tracing processor (for manual registration) |

#### Methods

| Method | Description |
|--------|-------------|
| `flush()` | Flush buffered events to Raindrop |
| `shutdown()` | Flush events and release resources |
| `identify(user_id, traits=None)` | Identify a user with optional traits |
| `track_signal(event_id, name, signal_type, *, timestamp, properties, attachment_id, comment, after, sentiment)` | Attach a signal (feedback, label, etc.) to an existing event |

## Tool Call Tracking

Tool calls made by agents are automatically captured as individual tool spans. Each span includes:

- **Name** — the function/tool name
- **Input** — the arguments passed to the tool
- **Output** — the tool's return value
- **Duration** — execution time in milliseconds (computed from SDK span timestamps)
- **Error** — error message if the tool call failed

Tool spans are tracked via the Raindrop Interaction API (`interaction.track_tool()`), providing full visibility into agent tool usage alongside LLM generation data.
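
Since durations are derived from the SDK's span timestamps rather than timed separately, the computation amounts to something like the following sketch. The ISO-8601 timestamp format and the helper name are illustrative assumptions, not part of the package API:

```python
from datetime import datetime

def span_duration_ms(started_at: str, ended_at: str) -> float:
    """Illustrative helper: milliseconds between two ISO-8601 timestamps."""
    start = datetime.fromisoformat(started_at)
    end = datetime.fromisoformat(ended_at)
    return (end - start).total_seconds() * 1000.0

# Example: a tool span that ran for 250 ms.
print(span_duration_ms("2024-01-01T00:00:00.000+00:00",
                       "2024-01-01T00:00:00.250+00:00"))  # → 250.0
```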

## Extended Token Categories

For OpenAI o1/o3 models that report detailed token breakdowns, the integration captures:

| Property | Source | Description |
|----------|--------|-------------|
| `ai.usage.cached_tokens` | `input_tokens_details.cached_tokens` or `prompt_tokens_details.cached_tokens` | Tokens served from cache |
| `ai.usage.thoughts_tokens` | `output_tokens_details.reasoning_tokens` or `completion_tokens_details.reasoning_tokens` | Tokens used for internal reasoning |

These are reported alongside the standard `ai.usage.prompt_tokens` and `ai.usage.completion_tokens`.
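
The two-source fallback in the table can be sketched as follows. The dict shape mirrors the field names above (Responses API names first, Chat Completions names as fallback); `extract_extended_usage` is an illustrative name, not a function exported by this package:

```python
def extract_extended_usage(usage: dict) -> dict:
    """Pull cached and reasoning token counts, trying the Responses API
    field names first and falling back to the Chat Completions names."""
    cached = (
        usage.get("input_tokens_details", {}).get("cached_tokens")
        or usage.get("prompt_tokens_details", {}).get("cached_tokens")
    )
    thoughts = (
        usage.get("output_tokens_details", {}).get("reasoning_tokens")
        or usage.get("completion_tokens_details", {}).get("reasoning_tokens")
    )
    props = {}
    if cached is not None:
        props["ai.usage.cached_tokens"] = cached
    if thoughts is not None:
        props["ai.usage.thoughts_tokens"] = thoughts
    return props

usage = {"prompt_tokens_details": {"cached_tokens": 512},
         "output_tokens_details": {"reasoning_tokens": 128}}
print(extract_extended_usage(usage))
# → {'ai.usage.cached_tokens': 512, 'ai.usage.thoughts_tokens': 128}
```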

## Debug Mode

```python
raindrop = RaindropOpenAIAgents(
    api_key="your-write-key",
    debug=True,  # enable verbose logging
)
```

## Identify Users

```python
raindrop.identify("user-42", traits={"plan": "pro", "company": "Acme"})
```

## Track Signals

```python
raindrop.track_signal(
    event_id="evt_abc123",
    name="thumbs_up",
    signal_type="feedback",
    sentiment="POSITIVE",
    comment="Great answer!",
)
```

## Flush & Shutdown

```python
raindrop.flush()     # flush pending data
raindrop.shutdown()  # flush + release resources
```

## Known Limitations

- **Multi-response traces** — in multi-agent workflows, only the last response's data survives per trace.

## Full Documentation

[docs.raindrop.ai/integrations/openai-agents](https://docs.raindrop.ai/integrations/openai-agents)

## Testing

```bash
cd packages/openai-agents-python
pip install -e ".[dev]"
python -m pytest tests/ -v
```

## License

MIT
