Metadata-Version: 2.4
Name: raindrop-crewai
Version: 0.0.1
Summary: Raindrop integration for CrewAI multi-agent framework
Project-URL: Homepage, https://raindrop.ai
Project-URL: Repository, https://github.com/invisible-tools/dawn/tree/main/packages/crewai-python
Project-URL: Documentation, https://docs.raindrop.ai
Author-email: Raindrop AI <sdk@raindrop.ai>
License-Expression: MIT
License-File: LICENSE
Classifier: Typing :: Typed
Requires-Python: >=3.10
Requires-Dist: raindrop-ai>=0.0.42
Provides-Extra: crewai
Requires-Dist: crewai>=1.1.0; extra == 'crewai'
Provides-Extra: dev
Requires-Dist: pytest-asyncio>=0.23.0; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Requires-Dist: wrapt<2; extra == 'dev'
Description-Content-Type: text/markdown

# raindrop-crewai

Raindrop integration for [CrewAI](https://www.crewai.com/) (Python). Automatically captures crew kickoff invocations, multi-agent collaboration, task execution, and token usage by monkey-patching `Crew.kickoff*`.

## Installation

```bash
pip install raindrop-crewai crewai
```

## Quick Start

```python
from raindrop_crewai import RaindropCrewAI
from crewai import Agent, Crew, Task

raindrop = RaindropCrewAI(
    api_key="your-write-key",
    user_id="user-123",
)
raindrop.setup()  # auto-patch all Crew.kickoff* methods

agent = Agent(
    role="Senior Researcher",
    goal="Find one interesting fact about {topic}",
    backstory="You are an experienced researcher.",
)
task = Task(
    description="Identify one interesting fact about {topic}.",
    expected_output="A single sentence fact.",
    agent=agent,
)
crew = Crew(agents=[agent], tasks=[task])

result = crew.kickoff(inputs={"topic": "AI safety"})
print(result.raw)

raindrop.shutdown()
```

### Factory function (alternative)

```python
from raindrop_crewai import create_raindrop_crewai

raindrop = create_raindrop_crewai(api_key="your-write-key", user_id="user-123")
wrapped = raindrop.wrap(crew)        # per-instance wrap (no global monkey-patch)
result = wrapped.kickoff(inputs={"topic": "AI safety"})
raindrop.shutdown()
```

## What Gets Captured

- **Crew kickoff invocations** — input variables, crew name, process type (sequential/hierarchical)
- **Agent metadata** — agent roles and count
- **Task metadata** — task descriptions, count, and per-task summaries
- **Token usage** — `ai.usage.prompt_tokens`, `ai.usage.completion_tokens`, `ai.usage.cached_tokens`, `ai.usage.total_tokens`
- **Model name** — extracted from the first agent's `llm.model` (when set)
- **Errors** — `error.type` / `error.message` properties; the original exception is always re-raised
- **Async support** — `kickoff()`, `kickoff_async()`, `kickoff_for_each()`, `kickoff_for_each_async()` are all instrumented
- **Nested OTel trace spans** — crew workflow → agent → task → LLM, via the `opentelemetry-instrumentation-crewai` instrumentor (see [Tracing](#tracing); per-tool spans are not produced — see [Known Limitations](#known-limitations))

## Configuration Patterns

### Auto-patch all crews (recommended)

Monkey-patches the `Crew` class so every `kickoff*` call is traced:

```python
from raindrop_crewai import setup_crewai

raindrop = setup_crewai(api_key="your-write-key", user_id="user-123")
# Every Crew.kickoff(...) is now traced automatically.
```

### Wrap a specific crew

Per-instance wrap with no global side effects:

```python
raindrop = RaindropCrewAI(api_key="your-write-key", user_id="user-123")
wrapped = raindrop.wrap(crew)
result = wrapped.kickoff(inputs={"topic": "..."})
```

## Debug Mode

Enable verbose logging with `debug=True`:

```python
raindrop = RaindropCrewAI(api_key="your-write-key", debug=True)
```

This sets the `raindrop_crewai` logger to `DEBUG`, surfacing telemetry-side failures that are otherwise swallowed (so the user's pipeline never crashes due to instrumentation).
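If your application already configures logging, you can route the integration's logger yourself instead of (or in addition to) passing `debug=True`. A minimal sketch using only the standard library; the logger name `raindrop_crewai` comes from the paragraph above, everything else is plain stdlib `logging`:

```python
import logging

# Attach a stderr handler to the integration's logger and lower its
# level to DEBUG, independent of the root logger's configuration.
handler = logging.StreamHandler()
handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
)
logger = logging.getLogger("raindrop_crewai")
logger.addHandler(handler)
logger.setLevel(logging.DEBUG)
```

This gives you control over format and destination, which `debug=True` alone does not.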

## Identify Users

Associate events with a user identity after initialization:

```python
raindrop.identify("user-123", {"name": "Alice", "plan": "pro"})
```

## Track Signals

Attach feedback, edits, or other custom signals to a previously shipped event by its `event_id`. The Python SDK does not currently expose a `lastEventId` accessor like the TypeScript client does, so you need the `event_id` from one of two places: (a) out-of-band from your application (e.g. logged when `debug=True`), or (b) the public ID surfaced on the dashboard's event detail page.

```python
raindrop.track_signal(
    event_id="<event-id-from-dashboard-or-debug-log>",
    name="thumbs_up",
    signal_type="feedback",
    sentiment="POSITIVE",
    comment="Great answer!",
)
```

## Tracing

When `tracing_enabled=True` (the default), the integration activates the `opentelemetry-instrumentation-crewai` instrumentor via `traceloop-sdk`, producing nested OTel spans for every crew execution:

- **Crew workflow** — root span covering the entire `kickoff()` call (`crewai.workflow`)
- **Agent execution** — child span per agent (`{role}.agent`)
- **Task execution** — child span per task (`{name}.task`)
- **LLM calls** — leaf spans for each underlying LLM call, with model name and token usage

Trace spans land in the dashboard's Traces tab after async ingestion (typically 30–120s).

To ship flat events only without OTel spans:

```python
raindrop = RaindropCrewAI(api_key="your-write-key", tracing_enabled=False)
```

## Async Usage

```python
import asyncio

async def main():
    result = await crew.kickoff_async(inputs={"topic": "AI safety"})
    print(result.raw)

    results = await crew.kickoff_for_each_async(
        inputs=[{"topic": "AI"}, {"topic": "ML"}]
    )

asyncio.run(main())
```

## Flushing and Shutdown

```python
raindrop.flush()     # flush pending data
raindrop.shutdown()  # flush + release resources (call before process exit)
```
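To guard against forgetting the final flush, one pattern (a sketch, not part of the package's API) is registering `shutdown` with `atexit` so it runs on normal interpreter exit; the helper below is hypothetical:

```python
import atexit

def register_shutdown(client) -> None:
    """Flush and shut down `client` on normal interpreter exit.

    `client` is any object with a shutdown() method, e.g. the
    RaindropCrewAI instance from Quick Start.
    """
    atexit.register(client.shutdown)
```

Note that `atexit` hooks do not run when the process is killed by a signal or `os._exit()`, so an explicit `shutdown()` in a `finally` block remains the safer choice for long-running services.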

## API Reference

### `RaindropCrewAI`

| Parameter | Type | Default | Description |
|---|---|---|---|
| `api_key` | `Optional[str]` | `None` | Raindrop API key. If `None`, telemetry shipping is disabled (a `UserWarning` is issued). |
| `user_id` | `Optional[str]` | `None` | Default user identifier for all events. |
| `convo_id` | `Optional[str]` | `None` | Conversation/thread ID to group related events. |
| `tracing_enabled` | `bool` | `True` | Enable distributed tracing via the OTel CrewAI instrumentor. |
| `bypass_otel_for_tools` | `bool` | `True` | Forwarded to `raindrop.init()`. Controls whether tool spans on the underlying SDK skip the OTLP path. The CrewAI integration itself does not call `interaction.track_tool()`, so this flag has no observable effect on CrewAI events — exposed for SDK API parity. |
| `debug` | `bool` | `False` | Enable debug logging. |
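Putting the table together, a constructor call with every parameter spelled out (all values are placeholders):

```python
from raindrop_crewai import RaindropCrewAI

raindrop = RaindropCrewAI(
    api_key="your-write-key",    # None disables shipping (UserWarning)
    user_id="user-123",
    convo_id="thread-42",        # placeholder conversation/thread ID
    tracing_enabled=True,        # OTel CrewAI instrumentor on
    bypass_otel_for_tools=True,  # forwarded to raindrop.init()
    debug=False,
)
```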

### Methods

| Method | Description |
|---|---|
| `setup()` | Monkey-patch `Crew.kickoff*` so every kickoff is traced (one-time, idempotent). |
| `wrap(crew)` | Per-instance wrap of a single `Crew` (no global side effects). Returns the wrapped crew. |
| `flush()` | Flush all pending events to the Raindrop API. |
| `shutdown()` | Flush remaining events and release resources. |
| `identify(user_id, traits)` | Identify a user with optional traits (`Dict[str, str \| int \| bool \| float]`). |
| `track_signal(event_id, name, ...)` | Track a signal event. See `track_signal` signature in `wrapper.py` for the full keyword-only parameters. |

### Module-level helpers

- `create_raindrop_crewai(...)` — constructs a `RaindropCrewAI`; on failure it returns a no-op instance instead of raising.
- `setup_crewai(...)` — convenience helper: constructs a `RaindropCrewAI` and calls `setup()` in one step.

## Known Limitations

- **No per-tool spans / empty `event.toolCalls`** — the underlying `opentelemetry-instrumentation-crewai` instrumentor only wraps `Crew.kickoff` (workflow), `Agent.execute_task` (`.agent`), `Task.execute_sync` (`.task`), and `LLM.call` (LLM generation). It does NOT produce a span per tool invocation, so `event.toolCalls` on the dashboard will be empty for CrewAI runs. Tool usage surfaces only in the agent's final output text and indirectly in the agent/task span durations.
- **Streaming** — CrewAI's `CrewStreamingOutput` is returned to the caller as-is; the flat Raindrop event is shipped after the stream completes.
- **`finish_reason` is per-LLM, not per-crew** — `CrewOutput` does not expose `finish_reason`. Per-LLM finish reasons appear on the LLM trace spans inside the workflow trace, not on the flat event.
- **Python SDK feature surface** — the Python SDK is module-level and does not expose `EventShipper` / `TraceShipper` classes. The `identify()` and `track_signal()` methods are pass-through wrappers around the `raindrop.analytics.*` module functions.
- **wrapt < 2 required for tracing** — the OTel CrewAI instrumentor uses `wrapt.wrap_function_wrapper(..., module=...)` which was removed in wrapt 2.0. The package's dev dependencies pin `wrapt<2`; production environments using wrapt 2.x will see no trace spans (the flat event still ships).
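Given the wrapt constraint in the last bullet, a defensive startup check can tell you up front whether trace spans will appear. This helper is hypothetical (not part of the package) and only assumes wrapt is installed under its PyPI name:

```python
from importlib.metadata import PackageNotFoundError, version

def wrapt_tracing_compatible() -> bool:
    """True when the installed wrapt is a 1.x release (spans will work)."""
    try:
        major = int(version("wrapt").split(".")[0])
    except PackageNotFoundError:
        return False  # wrapt missing: the instrumentor cannot load
    return major < 2

if not wrapt_tracing_compatible():
    print("warning: incompatible wrapt; CrewAI trace spans will be missing")
```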

## Testing

```bash
cd packages/crewai-python
pip install -e ".[crewai,dev]"

# All tests. Unit tests run unconditionally; e2e tests skip gracefully
# unless RAINDROP_WRITE_KEY + OPENAI_API_KEY + RAINDROP_DASHBOARD_TOKEN
# are all set (this is what CI does).
python -m pytest tests/ -v

# End-to-end against the live Raindrop backend (manual run, requires
# all three env vars). Get RAINDROP_DASHBOARD_TOKEN from app.raindrop.ai
# DevTools → Network → any backend.raindrop.ai request → Authorization
# header (token expires every ~30 min).
RAINDROP_WRITE_KEY=your-write-key \
  RAINDROP_DASHBOARD_TOKEN=eyJ... \
  OPENAI_API_KEY=sk-... \
  python -m pytest tests/test_e2e.py -v
```

## Documentation

Full documentation: [Raindrop CrewAI Integration](https://docs.raindrop.ai/integrations/crewai).
