Metadata-Version: 2.4
Name: opencrew
Version: 0.1.0
Summary: Run persistent memory agents locally. Deploy in one command.
License-Expression: Apache-2.0
License-File: LICENSE
Requires-Python: >=3.11
Requires-Dist: fastapi>=0.135
Requires-Dist: httpx>=0.28
Requires-Dist: psycopg-pool>=3.3
Requires-Dist: psycopg[binary]>=3.3
Requires-Dist: pydantic>=2.12
Requires-Dist: python-dotenv>=1.2
Requires-Dist: pyyaml>=6.0
Requires-Dist: rich>=13.0
Requires-Dist: typer>=0.15
Requires-Dist: uvicorn[standard]>=0.41
Provides-Extra: composio
Requires-Dist: composio; extra == 'composio'
Description-Content-Type: text/markdown

# OpenCrew

**Run persistent memory agents locally. Deploy in one command.**

OpenCrew gives you a local runtime for building agents that actually remember — powered by [Letta](https://github.com/letta-ai/letta) for persistent memory, with a CLI, Python SDK, and REST API out of the box.

```
CrewAI for one-shot tasks. OpenCrew for agents that actually remember.
```

---

## Quickstart

```bash
pip install opencrew
opencrew init   # guided: [1] full local stack (Docker + Letta) or [2] cloud API + workspace key
opencrew create my-agent
# Edit my-agent/agent.yaml with your persona and tools
opencrew run my-agent "What's the best way to load a CSV in Python?"
opencrew memory my-agent
```

**First-time setup:** `opencrew init` walks through everything the CLI needs — either a **local** stack (Docker, Postgres, Letta, local API, optional Anthropic/OpenAI for Letta) or **remote** (OpenCrew API URL + workspace API key, optional BYO Letta URL/token). Both can exist in `~/.opencrew/config.yaml` under **`local:`** and **`remote:`**. The CLI/SDK uses **`remote`** when `api_url` + `api_key` are set there; otherwise **`local`**.

Non-interactive setup: run `opencrew init --local` or `opencrew init --cloud --api-url https://... --api-key ...`, or set the `OPENCREW_API_URL` / `OPENCREW_API_KEY` environment variables.
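The precedence rule above can be sketched in a few lines. This is illustrative only: the exact schema of `~/.opencrew/config.yaml` is an assumption, and only the stated rule (use `remote` when `api_url` + `api_key` are available, otherwise `local`, with environment variables taking priority) comes from this README.

```python
# Sketch of the profile-selection rule described above. The config schema
# here is an assumption; the precedence logic follows the README.
import os

def select_profile(config: dict) -> str:
    """Return 'remote' if usable remote credentials exist, else 'local'."""
    remote = config.get("remote") or {}
    # Environment variables stand in for config values when present.
    api_url = os.environ.get("OPENCREW_API_URL") or remote.get("api_url")
    api_key = os.environ.get("OPENCREW_API_KEY") or remote.get("api_key")
    return "remote" if api_url and api_key else "local"

print(select_profile({"remote": {"api_url": "https://api.example.com",
                                 "api_key": "wk-123"}}))   # 'remote'
print(select_profile({"local": {"letta_url": "http://localhost:8283"}}))  # 'local'
```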

## What You Get

- **Persistent memory** — agents remember across sessions, not just within them
- **Per-client scoping** — give each end-user their own memory-isolated agent instance
- **Config-as-code** — YAML agent configs, git-committable and reproducible
- **Tool plugins** — register custom Python/TypeScript tools or MCP servers
- **Skills system** — SKILL.md knowledge files agents consult on demand
- **Run management** — async runs with SSE streaming, full history, and logs
- **Python SDK** — programmatic access to everything the CLI does
- **REST API** — FastAPI server with OpenAPI docs at `/docs`

---

## Prerequisites

- **Python 3.11+**
- **Docker** (Docker Desktop or Docker Engine)
- **Node.js 20+** (includes npm)
- **LLM API key** (Anthropic or OpenAI)
- **Letta server** reachable at `LETTA_BASE_URL` (see [docs/README.md](docs/README.md)) — this is **separate** from OpenCrew’s catalog Postgres (agent types, runs, API keys).

---

## Installation

```bash
pip install opencrew
```

This installs both the `opencrew` CLI and the Python SDK (`from opencrew import OpenCrew`).

---

## Setup

```bash
opencrew init
```

This command:
1. Checks that Docker and Node.js are installed
2. Prompts for your LLM API key (or reads from `ANTHROPIC_API_KEY` env var)
3. Starts Postgres and Letta server via Docker Compose
4. Runs database migrations
5. Installs the Letta Code SDK
6. Generates your API key
7. Starts the OpenCrew API server

Everything runs on your local machine; no data leaves it except LLM API calls.

---

## CLI Reference

### Lifecycle

```bash
opencrew init            # Set up everything (Docker, DB, Letta, API server)
opencrew start           # Start services (if stopped)
opencrew stop            # Stop services
opencrew destroy         # Remove all data and infrastructure
opencrew doctor          # Check that everything is healthy
```

### Agents

```bash
opencrew create my-agent               # Scaffold agent config YAML
opencrew apply my-agent/agent.yaml     # Sync config to server
opencrew list                          # List all agent types
```

### Running

```bash
opencrew run my-agent "Hello!"                     # Run and stream response
opencrew run my-agent "Hello!" --client=user123    # Per-client memory isolation
opencrew run my-agent "Hello!" --verbose           # Show reasoning + tool calls
opencrew run my-agent "Hello!" --fresh             # Throwaway agent (no memory)
```

### Memory

```bash
opencrew memory my-agent                # Show what the agent remembers
opencrew memory my-agent --client=user123  # Per-client memory
opencrew reset my-agent                 # Wipe memory and start fresh
```

### History

```bash
opencrew logs                  # All recent runs
opencrew logs my-agent         # Runs for a specific agent
opencrew logs --status=failed  # Filter by status
```

---

## Agent Config (YAML)

```yaml
name: my-agent
slug: my-agent
description: A helpful coding assistant
model: anthropic/claude-sonnet-4-20250514

persona: |
  You are a Python expert specialized in data science.
  You prefer clean, well-documented code.

memory:
  rules: "Always use type hints. Prefer pandas over raw loops."
  project: "Building a REST API with FastAPI"

tools:
  - name: run-python
    slug: run-python
    source_type: python
    source_code: |
      def run_python(code: str) -> str:
          """Execute Python code and return output."""
          import subprocess
          result = subprocess.run(["python3", "-c", code],
                                  capture_output=True, text=True, timeout=30)
          return result.stdout or result.stderr

mcp_servers:
  - name: filesystem
    slug: filesystem
    server_url: https://your-mcp-server.example.com/sse

skills:
  - slug: python-best-practices
    path: ./skills/python-best-practices/SKILL.md
```
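Since `pyyaml` is already a dependency, a config like the one above can be loaded and sanity-checked in a few lines. The required-field list below mirrors this example; the actual schema enforced by `opencrew apply` may be stricter:

```python
# Load and sanity-check an agent config like the one above. The REQUIRED
# tuple is an assumption based on this README's example, not the server's
# authoritative schema.
import yaml

REQUIRED = ("name", "slug", "model", "persona")

def load_agent_config(text: str) -> dict:
    cfg = yaml.safe_load(text)
    missing = [key for key in REQUIRED if key not in cfg]
    if missing:
        raise ValueError(f"agent config missing fields: {missing}")
    # Default optional sections to empty lists so callers can iterate safely.
    for section in ("tools", "mcp_servers", "skills"):
        cfg.setdefault(section, [])
    return cfg

cfg = load_agent_config("""
name: my-agent
slug: my-agent
model: anthropic/claude-sonnet-4-20250514
persona: You are a Python expert.
""")
print(cfg["slug"], len(cfg["tools"]))  # my-agent 0
```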

---

## Python SDK

```python
from opencrew import OpenCrew

client = OpenCrew(api_key="<your-workspace-api-key>")

# Create an agent type
client.agent_types.create(
    name="my-agent",
    slug="my-agent",
    persona="You are a Python expert.",
    model="anthropic/claude-sonnet-4-20250514",
    memory_defaults={"rules": "Always use type hints."},
)

# Trigger a run and stream results
run = client.runs.create(agent_type="my-agent", message="What's the best way to load a CSV?")
for event in client.runs.stream(run["id"]):
    if event.type == "assistant":
        print(event.content, end="")

# Check memory
for block in client.instances.memory("my-agent"):
    print(f"{block['label']}: {block['value']}")
```

### Async

```python
from opencrew import AsyncOpenCrew

client = AsyncOpenCrew(api_key="<your-workspace-api-key>")
run = await client.runs.create(agent_type="my-agent", message="hello")
async for event in client.runs.stream(run["id"]):
    print(event.content, end="")
```
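Under the hood, run streaming is delivered as Server-Sent Events (SSE). For anyone consuming the endpoint without the SDK, this is roughly how SSE frames decode into `(event, data)` pairs; the payload shape emitted by the OpenCrew API is an assumption here, not documented behavior:

```python
# Generic SSE frame parser: frames are "event:"/"data:" lines terminated
# by a blank line. The example payload below is hypothetical.
def parse_sse(raw: str):
    """Yield (event_type, data) tuples from raw SSE text."""
    event, data_lines = "message", []
    for line in raw.splitlines():
        if line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data_lines.append(line[len("data:"):].strip())
        elif line == "" and data_lines:
            # A blank line terminates the current frame.
            yield event, "\n".join(data_lines)
            event, data_lines = "message", []

frames = list(parse_sse('event: assistant\ndata: {"content": "hi"}\n\n'))
print(frames)  # [('assistant', '{"content": "hi"}')]
```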

### SDK-Only Mode

The SDK works with any OpenCrew server — local or remote:

```python
# Local server (default after opencrew init)
client = OpenCrew(api_key="<your-workspace-api-key>", base_url="http://localhost:8741")

# Remote / cloud server
client = OpenCrew(api_key="<your-workspace-api-key>", base_url="https://api.example.com")
```

---

## Architecture

```
Your Machine
├── CLI / Python SDK
│   └── HTTP → FastAPI server (localhost:8741)
│               └── ThreadPoolExecutor spawns subprocess
│                   └── tsx runs TypeScript script
│                       └── Letta Code SDK
│                           └── HTTP → Letta Server (localhost:8283)
│                                       └── Letta's persistence (often Postgres)
```

All services run locally via Docker Compose:
- **Postgres** (pgvector) — may hold **OpenCrew catalog** tables (runs, keys, agent types) and/or be used by Letta depending on your compose file; treat **control plane data** and **Letta agent state** as logically separate for backups and migration.
- **Letta Server** — manages agent memory, conversations, and tool execution
- **FastAPI Server** — orchestrates runs, manages config, serves the API

The only external traffic is LLM API calls (Anthropic/OpenAI).
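A `doctor`-style liveness probe against the local services above can be sketched with a plain TCP check (ports from this README: API on 8741, Letta on 8283). The real `opencrew doctor` presumably checks more than TCP reachability:

```python
# Minimal reachability probe for the local stack. Ports come from this
# README; this is a sketch, not the actual `opencrew doctor` logic.
import socket

def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, port in (("opencrew-api", 8741), ("letta", 8283)):
    status = "up" if port_open("127.0.0.1", port) else "down"
    print(f"{name} ({port}): {status}")
```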

---

## API Docs

After `opencrew init`, browse to [http://localhost:8741/docs](http://localhost:8741/docs) for the full OpenAPI documentation.

---

## License

Apache-2.0
