Metadata-Version: 2.4
Name: arccrew
Version: 0.8.2
Summary: Framework for building multi-agent LangGraph pipelines with Claude Code skills
Project-URL: Homepage, https://github.com/amonrreal/arccrew
Project-URL: Repository, https://github.com/amonrreal/arccrew
Project-URL: Issues, https://github.com/amonrreal/arccrew/issues
Author-email: Alvaro Monrreal <alvaro.monrreal@gmail.com>
License: MIT
License-File: LICENSE
Keywords: agents,langchain,langgraph,llm,multi-agent,pipeline
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Requires-Dist: click>=8.0.0
Requires-Dist: fastapi>=0.115.0
Requires-Dist: httpx>=0.27.0
Requires-Dist: langchain-anthropic>=0.3.0
Requires-Dist: langchain-community>=0.3.0
Requires-Dist: langchain-core>=0.3.0
Requires-Dist: langchain>=0.3.0
Requires-Dist: langgraph-supervisor>=0.0.1
Requires-Dist: langgraph>=0.2.0
Requires-Dist: pydantic-settings>=2.0.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: python-jose[cryptography]>=3.3.0
Requires-Dist: uvicorn[standard]>=0.32.0
Provides-Extra: all
Requires-Dist: google-auth-oauthlib>=1.0.0; extra == 'all'
Requires-Dist: google-auth>=2.0.0; extra == 'all'
Requires-Dist: gspread>=6.0.0; extra == 'all'
Requires-Dist: langchain-mcp-adapters>=0.2.0; extra == 'all'
Requires-Dist: mcp>=1.0.0; extra == 'all'
Requires-Dist: opentelemetry-api>=1.20; extra == 'all'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc>=1.20; extra == 'all'
Requires-Dist: opentelemetry-sdk>=1.20; extra == 'all'
Provides-Extra: dev
Requires-Dist: hatch>=1.13.0; extra == 'dev'
Requires-Dist: pre-commit>=4.0.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.24.0; extra == 'dev'
Requires-Dist: pytest>=8.0.0; extra == 'dev'
Requires-Dist: ruff>=0.8.0; extra == 'dev'
Provides-Extra: mcp
Requires-Dist: langchain-mcp-adapters>=0.2.0; extra == 'mcp'
Requires-Dist: mcp>=1.0.0; extra == 'mcp'
Provides-Extra: otel
Requires-Dist: opentelemetry-api>=1.20; extra == 'otel'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc>=1.20; extra == 'otel'
Requires-Dist: opentelemetry-sdk>=1.20; extra == 'otel'
Provides-Extra: sheets
Requires-Dist: google-auth-oauthlib>=1.0.0; extra == 'sheets'
Requires-Dist: google-auth>=2.0.0; extra == 'sheets'
Requires-Dist: gspread>=6.0.0; extra == 'sheets'
Description-Content-Type: text/markdown

# ArcCrew

[![PyPI version](https://img.shields.io/pypi/v/arccrew)](https://pypi.org/project/arccrew/)
[![Python](https://img.shields.io/pypi/pyversions/arccrew)](https://pypi.org/project/arccrew/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

**Multi-agent LangGraph pipelines — scaffold, build, and ship in minutes.**

ArcCrew is a Python framework for building production-ready multi-agent pipelines on LangGraph. It ships with a CLI, a 3-layer prompt system, a FastAPI server, an MCP server, and AI coding skills that let you generate entire pipelines from a description.

---

## Install

```bash
pip install arccrew
```

## Quick start

```bash
# 1. Scaffold a new project
arccrew init my-project
cd my-project

# 2. Set your API key
cp .env.example .env
# edit .env → set ANTHROPIC_API_KEY

# 3. Verify everything is configured
arccrew check

# 4. Open in Claude Code and run:
# /build-agents  ← describe your pipeline, get all files generated
```

Or build manually:

```python
# agents/researcher.py
from arccrew import BaseAgent, track_timing
from arccrew.tools import get_research_tools
from langgraph.types import Command
from pathlib import Path

class ResearcherAgent(BaseAgent):
    def __init__(self):
        super().__init__(name="researcher", prompts_dir=Path("prompts"))

    @property
    def system_prompt(self) -> str:
        return self.get_prompt_manager().assemble_prompt("researcher")

    @track_timing
    async def execute(self, state: dict) -> Command:
        task = state["tasks"][state["current_task_index"]]["description"]
        result = await self.run_react(task=task, tools=get_research_tools())
        return Command(goto="writer", update={"context": self.extract_json(result)})
```

```python
# pipeline.py
from arccrew import create_pipeline, PipelineState
from arccrew.api.deps import pipeline_registry
from arccrew.mcp_server import register_pipeline
from agents.researcher import ResearcherAgent

def create_my_pipeline():
    researcher = ResearcherAgent()
    return create_pipeline(
        state_class=PipelineState,
        nodes={"researcher": lambda s: researcher.execute(s)},
        flow=["researcher"],
    )

pipeline_registry.register("my_pipeline", create_my_pipeline)
register_pipeline("my_pipeline", create_my_pipeline)
```

```bash
arccrew serve       # REST API on :8000
arccrew serve-mcp   # MCP server for Claude Desktop and other MCP clients
```

Once the server is running, open **http://localhost:8000/playground** to explore your pipelines interactively, or call them via API:

```bash
# Simple text input
curl -X POST http://localhost:8000/api/runs \
  -H "Content-Type: application/json" \
  -d '{"pipeline": "my_pipeline", "input": "write a blog post about AI"}'

# Structured input (dict — auto-serialized, keys available as metadata)
curl -X POST http://localhost:8000/api/runs \
  -H "Content-Type: application/json" \
  -d '{"pipeline": "my_pipeline", "input": {"topic": "AI", "tone": "casual", "words": 400}}'

# Poll for results
curl http://localhost:8000/api/runs/{run_id}

# Stream events as the pipeline runs (SSE)
curl -N http://localhost:8000/api/runs/{run_id}/stream
```

---

## Features

- **3-layer prompt system** — library base + your project globals + per-agent role
- **AI coding skills** — pre-installed in every scaffolded project, work natively in Claude Code
- **Built-in tools** — web search (Tavily or DDG Lite), file management, shell execution, MCP server adapters, ready to plug into any agent
- **FastAPI server** — non-blocking REST API (`POST /api/runs` → 202 + `run_id`, poll with `GET /api/runs/{id}`), SSE streaming, WebSocket, interactive playground at `/playground`
- **MCP dual role** — expose your pipelines as MCP tools AND connect external MCP servers (GitHub, Notion, Slack…) as agent tools
- **Multi-provider** — Claude by default; OpenAI, Gemini, Groq, OpenRouter (200+ models), and Ollama (local) via env var
- **Supervisor pattern** — LLM-driven routing as an alternative to manual graph wiring
- **Retry / verification loops** — built-in worker → verifier → retry pattern
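
The worker → verifier → retry loop can be sketched in plain Python. ArcCrew ships this as a built-in graph pattern, so the function below illustrates the control flow only; its names are not the library API:

```python
# Illustrative sketch of the worker -> verifier -> retry pattern.
# arccrew implements this as a graph pattern; this is just the control flow.
from typing import Callable

def run_with_verification(worker: Callable[[str], str],
                          verifier: Callable[[str], bool],
                          task: str, max_retries: int = 3) -> str:
    """Run the worker, verify its output, and retry with feedback on failure."""
    attempt = task
    for _ in range(max_retries + 1):
        output = worker(attempt)
        if verifier(output):
            return output
        # feed the rejected output back so the worker can revise it
        attempt = f"{task}\n\nPrevious attempt rejected, revise:\n{output}"
    raise RuntimeError(f"verification failed after {max_retries} retries")
```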

---

## Prompt layers

Every agent's system prompt is assembled in this order:

| Layer | File | Who controls it |
|---|---|---|
| Base | bundled in arccrew | library — universal agent rules |
| Global | `prompts/global.md` | you — project-wide rules (tone, domain, language) |
| Agent | `prompts/{agent}.md` | you — role, tools, output schema |
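
The assembly order can be pictured as simple concatenation. This is a simplified stand-in for illustration, not arccrew's actual `PromptManager` implementation; the file names follow the table above:

```python
# Simplified illustration of 3-layer prompt assembly (not the real
# PromptManager). Missing layers are skipped rather than raising.
from pathlib import Path

BASE_PROMPT = "You are a helpful agent. Follow the task exactly."  # bundled layer

def assemble_prompt(agent: str, prompts_dir: Path) -> str:
    """Concatenate base -> global -> per-agent layers in order."""
    layers = [BASE_PROMPT]
    for name in ("global", agent):
        path = prompts_dir / f"{name}.md"
        if path.exists():
            layers.append(path.read_text())
    return "\n\n".join(layers)
```

Later layers appear later in the prompt, so per-agent instructions can refine or override the project-wide rules.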

---

## Built-in tools

These tools ship with arccrew and can be imported into any agent:

```python
from arccrew.tools import get_research_tools, create_workspace_tools, get_mcp_tools

# Research tools — web_search
# Uses Tavily if TAVILY_API_KEY is set (recommended), DDG Lite otherwise (free, no key)
tools = get_research_tools()

# Workspace tools — write_file, read_file, list_files, run_shell
from pathlib import Path
tools = create_workspace_tools(Path("workspace"))

# MCP tools — connect any MCP server (GitHub, Notion, Slack, custom…)
# Requires: pip install "arccrew[mcp]"
import os
tools = await get_mcp_tools({
    "github": {
        "transport": "sse",
        "url": "https://api.githubcopilot.com/mcp/",
        "headers": {"Authorization": f"Bearer {os.getenv('GITHUB_TOKEN')}"},
    }
})
```

Combine freely in any agent:

```python
result = await self.run_react(
    task=task,
    tools=(
        get_research_tools()
        + create_workspace_tools(Path("workspace"))
        + await get_mcp_tools({"github": GITHUB_MCP})
    )
)
```

---

## Adding your own tools

Create a file in `tools/` named after your domain:

```python
# tools/calendar_tools.py
from langchain_core.tools import tool

@tool
async def get_availability(date: str) -> str:
    """Check calendar availability for a given date (YYYY-MM-DD).

    Use for: tasks that require checking free/busy slots.
    Do NOT use for: general research (use web_search instead).

    Args:
        date: Date to check in YYYY-MM-DD format.

    Returns:
        Available time slots as a formatted string.
    """
    try:
        # your implementation here
        return f"Available slots for {date}: 9am, 2pm, 4pm"
    except Exception as e:
        return f"ERROR: {e}"

def get_calendar_tools() -> list:
    return [get_availability]
```

Combine with arccrew built-ins in any agent:

```python
from arccrew.tools import get_research_tools
from tools.calendar_tools import get_calendar_tools

result = await self.run_react(
    task=task,
    tools=get_research_tools() + get_calendar_tools(),
)
```

Use `/add-tool` in Claude Code to generate a new tool from a description.

---

## Environment variables

All configuration lives in `.env`. Copy `.env.example` after scaffolding:

```bash
# LLM provider (pick one or mix via per-agent overrides)
ANTHROPIC_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here
GROQ_API_KEY=your_key_here
GOOGLE_API_KEY=your_key_here
OPENROUTER_API_KEY=sk-or-v1-...   # 200+ models under one key (optional)

# Model (default for all agents)
AGENT_MODEL=anthropic/claude-haiku-4-5-20251001

# OpenRouter — access any model with one key (free tier available)
# AGENT_MODEL=openrouter/meta-llama/llama-3.1-8b-instruct:free

# Ollama — local models, no cost, no data leaves machine
# AGENT_MODEL=ollama/llama3.2   (requires: ollama serve + ollama pull llama3.2)

# Per-agent overrides — mix cheap and expensive models per agent
RESEARCHER_MODEL=anthropic/claude-sonnet-4-6
RESEARCHER_MAX_ROUNDS=10
WRITER_MAX_ROUNDS=5

# Web search backend (optional — DDG Lite is free and works without a key)
TAVILY_API_KEY=tvly-...   # recommended for production (1k free req/month)

# Workspace (where agents write files)
WORKSPACE_DIR=workspace

# API server
API_HOST=0.0.0.0
API_PORT=8000
API_AUTH_ENABLED=false   # set true + API_SECRET_KEY for production

# Observability (optional)
LANGSMITH_API_KEY=your_key_here
LANGSMITH_PROJECT=my-project
LANGSMITH_TRACING=true
```

---

## Skills

Every project created with `arccrew init` gets skills pre-installed in `.claude/commands/` and detailed references in `skills/`. They work natively in **Claude Code** as slash commands.

After upgrading arccrew, run `arccrew sync-skills` to get new and updated skills without touching your project files.

### Core
| Skill | What it does |
|---|---|
| `/build-agents` | Generate a full pipeline from a description |
| `/add-agent` | Add a single agent to an existing pipeline |
| `/add-tool` | Add a tool to an agent |
| `/add-state-field` | Add a custom state field with the right reducer |
| `/add-prompt` | Add or update an agent prompt |

### Patterns
| Skill | What it does |
|---|---|
| `/add-retry-loop` | Add retry + verification loop |
| `/add-review-gate` | Add a human-in-the-loop review gate |
| `/add-supervisor` | Add LLM-driven supervisor orchestration |

### Infrastructure
| Skill | What it does |
|---|---|
| `/add-api-endpoint` | Add a REST endpoint to the FastAPI server |
| `/add-mcp-pipeline` | Register a pipeline as an MCP tool |
| `/add-google-sheets` | Add Google Sheets read/write tools (OAuth2 auth, requires `pip install "arccrew[sheets]"`) |

### Configuration
| Skill | What it does |
|---|---|
| `/configure-claude` | Configure Claude as LLM provider |
| `/configure-openai` | Configure OpenAI as LLM provider |
| `/configure-gemini` | Configure Gemini as LLM provider |
| `/configure-openrouter` | Configure OpenRouter — 200+ models, free tier available |
| `/configure-ollama` | Configure Ollama for local models (no API key, no cost) |
| `/switch-provider` | Switch between LLM providers |
| `/configure-mcp` | Connect external MCP servers as agent tools |

### Quality
| Skill | What it does |
|---|---|
| `/enable-langsmith` | Set up LangSmith tracing |
| `/enable-otel` | Set up OpenTelemetry (Grafana, Datadog, Jaeger…) |
| `/debug-pipeline` | Diagnose pipeline errors |
| `/write-tests` | Generate tests for agents and tools |

---

## CLI

```bash
arccrew init <name>                    # scaffold a new project
arccrew check                          # verify config and dependencies
arccrew sync-skills                    # update skills after upgrading
arccrew run "describe your task"       # run a pipeline from the terminal
arccrew run "task" --pipeline my_name  # run a specific pipeline
arccrew visualize                      # print Mermaid diagram of your graph
arccrew visualize -o graph.png         # save diagram as PNG (requires pyppeteer)
arccrew visualize -o graph.mmd         # save raw Mermaid code
arccrew serve                          # start FastAPI server
arccrew serve-mcp                      # start MCP server (stdio)
```

---

## Local development

To work on arccrew itself and test changes immediately:

```bash
git clone https://github.com/amonrreal/arccrew
cd arccrew

# Install in editable mode — changes take effect without reinstalling
pip install -e ".[dev]"
cp .env.example .env
# Set ANTHROPIC_API_KEY (or another provider) in .env

# Verify the CLI uses your local version
arccrew --help

# Run the built-in example
python -m examples.researcher_writer.pipeline
python -m examples.researcher_writer.pipeline --supervisor

# Scaffold a test project and try the new commands
cd /tmp
arccrew init test-project
cd test-project
cp .env.example .env
# edit .env → set API key

arccrew check
# verifies config; expects a pipeline.py in the current directory

# Run tests
cd /path/to/arccrew
pytest
pytest tests/test_base_agent.py -v
pytest -k "test_name"
```

---

## MCP server

Expose your pipelines as tools in any MCP-compatible client (Claude Desktop, Claude Code):

```bash
arccrew serve-mcp   # local stdio transport
arccrew serve       # also serves /mcp for remote HTTP transport
```
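
For local stdio use, Claude Desktop's config entry typically points at the `arccrew` command directly. This sketch assumes `arccrew` is on your PATH (otherwise use its full path), and that the server is started from your project directory so it can find your `pipeline.py`; a small wrapper script that `cd`s first is one way to ensure that:

```json
{
  "mcpServers": {
    "my-project": {
      "command": "arccrew",
      "args": ["serve-mcp"]
    }
  }
}
```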

Remote connection (after deploying your project):
```json
{
  "mcpServers": {
    "my-project": {
      "url": "https://your-deployment-url/mcp"
    }
  }
}
```

---

## Utility helpers

```python
from arccrew.utils.helpers import truncate_text, extract_json_safe, slugify

# Truncate long LLM output before storing
short = truncate_text(long_response, max_chars=4000)

# Safely extract JSON from any LLM response
data = extract_json_safe(response, fallback={})

# Generate URL-safe slugs
slug = slugify("My Agent Result — 2026")  # "my-agent-result-2026"
```

---

## Example

See [`examples/researcher_writer/`](examples/researcher_writer/) for a complete working pipeline with two agents (Researcher + Writer) in both manual graph and supervisor patterns.

```bash
# Manual graph (BaseAgent subclasses)
python -m examples.researcher_writer.pipeline

# Supervisor pattern
python -m examples.researcher_writer.pipeline --supervisor
```

---

## License

ArcCrew is licensed under the [MIT License](LICENSE).
