Metadata-Version: 2.4
Name: arccrew
Version: 0.1.2
Summary: Framework for building multi-agent LangGraph pipelines with Claude Code skills
Project-URL: Homepage, https://github.com/amonrreal/arccrew
Project-URL: Repository, https://github.com/amonrreal/arccrew
Project-URL: Issues, https://github.com/amonrreal/arccrew/issues
Author-email: Alvaro Monrreal <alvaro.monrreal@gmail.com>
License: MIT
License-File: LICENSE
Keywords: agents,langchain,langgraph,llm,multi-agent,pipeline
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Requires-Dist: click>=8.0.0
Requires-Dist: ddgs>=6.0.0
Requires-Dist: fastapi>=0.115.0
Requires-Dist: langchain-anthropic>=0.3.0
Requires-Dist: langchain-community>=0.3.0
Requires-Dist: langchain-core>=0.3.0
Requires-Dist: langchain>=0.3.0
Requires-Dist: langgraph-supervisor>=0.0.1
Requires-Dist: langgraph>=0.2.0
Requires-Dist: pydantic-settings>=2.0.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: python-jose[cryptography]>=3.3.0
Requires-Dist: uvicorn[standard]>=0.32.0
Provides-Extra: dev
Requires-Dist: hatch>=1.13.0; extra == 'dev'
Requires-Dist: pre-commit>=4.0.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.24.0; extra == 'dev'
Requires-Dist: pytest>=8.0.0; extra == 'dev'
Requires-Dist: ruff>=0.8.0; extra == 'dev'
Provides-Extra: mcp
Requires-Dist: mcp>=1.0.0; extra == 'mcp'
Provides-Extra: otel
Requires-Dist: opentelemetry-api>=1.20; extra == 'otel'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc>=1.20; extra == 'otel'
Requires-Dist: opentelemetry-sdk>=1.20; extra == 'otel'
Description-Content-Type: text/markdown

# ArcCrew

[![PyPI version](https://img.shields.io/pypi/v/arccrew)](https://pypi.org/project/arccrew/)
[![Python](https://img.shields.io/pypi/pyversions/arccrew)](https://pypi.org/project/arccrew/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

**Multi-agent LangGraph pipelines — scaffold, build, and ship in minutes.**

ArcCrew is a Python framework for building production-ready multi-agent pipelines on LangGraph. It ships with a CLI, a 3-layer prompt system, a FastAPI server, an MCP server, and AI coding skills that let you generate entire pipelines from a description.

---

## Install

```bash
pip install arccrew
```

## Quick start

```bash
# 1. Scaffold a new project
arccrew init my-project
cd my-project

# 2. Set your API key
cp .env.example .env
# edit .env → set ANTHROPIC_API_KEY

# 3. Verify everything is configured
arccrew check

# 4. Open in Claude Code and run:
# /build-agents  ← describe your pipeline, get all files generated
```

Or build manually:

```python
# agents/researcher.py
from arccrew import BaseAgent, track_timing
from arccrew.tools import get_research_tools
from langgraph.types import Command
from pathlib import Path

class ResearcherAgent(BaseAgent):
    def __init__(self):
        super().__init__(name="researcher", prompts_dir=Path("prompts"))

    @property
    def system_prompt(self) -> str:
        return self.get_prompt_manager().assemble_prompt("researcher")

    @track_timing
    async def execute(self, state: dict) -> Command:
        task = state["tasks"][state["current_task_index"]]["description"]
        result = await self.run_react(task=task, tools=get_research_tools())
        return Command(goto="writer", update={"context": self.extract_json(result)})
```

```python
# pipeline.py
from arccrew import create_pipeline, PipelineState
from arccrew.api.deps import pipeline_registry
from arccrew.mcp_server import register_pipeline
from agents.researcher import ResearcherAgent

def create_my_pipeline():
    researcher = ResearcherAgent()
    return create_pipeline(
        state_class=PipelineState,
        nodes={"researcher": lambda s: researcher.execute(s)},
        flow=["researcher"],
    )

pipeline_registry.register("my_pipeline", create_my_pipeline)
register_pipeline("my_pipeline", create_my_pipeline)
```

```bash
arccrew serve       # REST API on :8000
arccrew serve-mcp   # MCP server for Claude Desktop and other MCP clients
```

---

## Features

- **3-layer prompt system** — library base + your project globals + per-agent role
- **18 AI coding skills** — pre-installed in every scaffolded project, work natively in Claude Code
- **Built-in tools** — web search, file management, shell execution, ready to plug into any agent
- **FastAPI server** — REST + SSE streaming + WebSocket out of the box
- **MCP server** — expose your pipelines as tools in any MCP-compatible client
- **Multi-provider** — Claude by default, any LangChain-supported provider via env var
- **Supervisor pattern** — LLM-driven routing as an alternative to manual graph wiring
- **Retry / verification loops** — built-in worker → verifier → retry pattern
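The worker → verifier → retry shape from the last bullet can be sketched framework-agnostically. This is a minimal illustration of the pattern, not arccrew's actual implementation: `draft`, `verify`, and `max_retries` are hypothetical names introduced here.

```python
# Minimal worker → verifier → retry loop: keep regenerating a draft
# until the verifier accepts it or the retry budget runs out.
def run_with_verification(draft, verify, max_retries=3):
    feedback = None
    for attempt in range(max_retries + 1):
        result = draft(feedback)       # worker produces (or revises) output
        ok, feedback = verify(result)  # verifier accepts, or explains what to fix
        if ok:
            return result
    raise RuntimeError(f"verification failed after {max_retries} retries")

# Toy usage: the worker appends the fix the verifier asked for.
out = run_with_verification(
    draft=lambda fb: "report" + (" with sources" if fb else ""),
    verify=lambda r: ("sources" in r, "add sources"),
)
# out == "report with sources"
```

In arccrew this loop is wired as graph nodes rather than a plain function; use `/add-retry-loop` to generate the real version.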

---

## Prompt layers

Every agent's system prompt is assembled in this order:

| Layer | File | Who controls it |
|---|---|---|
| Base | bundled in arccrew | library — universal agent rules |
| Global | `prompts/global.md` | you — project-wide rules (tone, domain, language) |
| Agent | `prompts/{agent}.md` | you — role, tools, output schema |
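As an illustration, the two project-controlled layers are plain Markdown files (the contents below are hypothetical; the base layer ships inside the package):

```markdown
<!-- prompts/global.md — applies to every agent -->
Write for a technical audience. Always answer in English.

<!-- prompts/researcher.md — role layer for the researcher agent -->
You are a research agent. Use web_search to gather sources, then
return your findings as a JSON object with keys "summary" and "sources".
```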

---

## Built-in tools

ArcCrew ships these tools ready to hand to any agent:

```python
from arccrew.tools import get_research_tools, create_workspace_tools

# Research tools — for agents that need to look things up
tools = get_research_tools()
# Includes: web_search (DuckDuckGo)

# Workspace tools — for agents that read/write files or run commands
from pathlib import Path
tools = create_workspace_tools(Path("workspace"))
# Includes: write_file, read_file, list_files, run_shell
```

Use them in any agent:

```python
result = await self.run_react(task=task, tools=get_research_tools())
result = await self.run_react(task=task, tools=create_workspace_tools(Path("workspace")))

# Combine both
result = await self.run_react(
    task=task,
    tools=get_research_tools() + create_workspace_tools(Path("workspace"))
)
```

---

## Adding your own tools

Create a file in `tools/` named after your domain:

```python
# tools/calendar_tools.py
from langchain_core.tools import tool

@tool
async def get_availability(date: str) -> str:
    """Check calendar availability for a given date (YYYY-MM-DD).

    Use for: tasks that require checking free/busy slots.
    Do NOT use for: general research (use web_search instead).

    Args:
        date: Date to check in YYYY-MM-DD format.

    Returns:
        Available time slots as a formatted string.
    """
    try:
        # your implementation here
        return f"Available slots for {date}: 9am, 2pm, 4pm"
    except Exception as e:
        return f"ERROR: {e}"

def get_calendar_tools() -> list:
    return [get_availability]
```

Combine with arccrew built-ins in any agent:

```python
from arccrew.tools import get_research_tools
from tools.calendar_tools import get_calendar_tools

result = await self.run_react(
    task=task,
    tools=get_research_tools() + get_calendar_tools(),
)
```

Use `/add-tool` in Claude Code to generate a new tool from a description.

---

## Environment variables

All configuration lives in `.env`. Copy `.env.example` after scaffolding:

```bash
# LLM provider (pick one)
ANTHROPIC_API_KEY=your_key_here
OPENAI_API_KEY=your_key_here
GROQ_API_KEY=your_key_here
GOOGLE_API_KEY=your_key_here

# Model (default for all agents)
AGENT_MODEL=anthropic/claude-haiku-4-5-20251001

# Per-agent overrides
RESEARCHER_MODEL=anthropic/claude-sonnet-4-6
RESEARCHER_MAX_ROUNDS=10
WRITER_MAX_ROUNDS=5

# Workspace (where agents write files)
WORKSPACE_DIR=workspace

# API server
API_HOST=0.0.0.0
API_PORT=8000
API_AUTH_ENABLED=false   # set true + API_SECRET_KEY for production

# Observability (optional)
LANGSMITH_API_KEY=your_key_here
LANGSMITH_PROJECT=my-project
LANGSMITH_TRACING=true
```
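Per-agent overrides follow the usual precedence: a `{AGENT}_MODEL` variable wins over the global `AGENT_MODEL`, which wins over the built-in default. A stdlib-only illustration of that lookup (`model_for` is a hypothetical helper, not arccrew's internal code):

```python
import os

def model_for(agent: str, default: str = "anthropic/claude-haiku-4-5-20251001") -> str:
    """Resolve the model for an agent: per-agent var > global var > default."""
    return (
        os.environ.get(f"{agent.upper()}_MODEL")
        or os.environ.get("AGENT_MODEL")
        or default
    )

os.environ["AGENT_MODEL"] = "anthropic/claude-haiku-4-5-20251001"
os.environ["RESEARCHER_MODEL"] = "anthropic/claude-sonnet-4-6"

model_for("researcher")  # per-agent override wins
model_for("writer")      # falls back to AGENT_MODEL
```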

---

## Skills

Every project created with `arccrew init` gets 18 skills pre-installed in `.claude/commands/` and detailed references in `skills/`. They work natively in **Claude Code** as slash commands.

After upgrading arccrew, run `arccrew sync-skills` to get new and updated skills without touching your project files.

### Core
| Skill | What it does |
|---|---|
| `/build-agents` | Generate a full pipeline from a description |
| `/add-agent` | Add a single agent to an existing pipeline |
| `/add-tool` | Add a tool to an agent |
| `/add-state-field` | Add a custom state field with the right reducer |
| `/add-prompt` | Add or update an agent prompt |

### Patterns
| Skill | What it does |
|---|---|
| `/add-retry-loop` | Add retry + verification loop |
| `/add-review-gate` | Add a human-in-the-loop review gate |
| `/add-supervisor` | Add LLM-driven supervisor orchestration |

### Infrastructure
| Skill | What it does |
|---|---|
| `/add-api-endpoint` | Add a REST endpoint to the FastAPI server |
| `/add-mcp-pipeline` | Register a pipeline as an MCP tool |

### Configuration
| Skill | What it does |
|---|---|
| `/configure-claude` | Configure Claude as LLM provider |
| `/configure-openai` | Configure OpenAI as LLM provider |
| `/configure-gemini` | Configure Gemini as LLM provider |
| `/switch-provider` | Switch between LLM providers |

### Quality
| Skill | What it does |
|---|---|
| `/enable-langsmith` | Set up LangSmith tracing |
| `/enable-otel` | Set up OpenTelemetry (Grafana, Datadog, Jaeger…) |
| `/debug-pipeline` | Diagnose pipeline errors |
| `/write-tests` | Generate tests for agents and tools |

---

## CLI

```bash
arccrew init <name>      # scaffold a new project
arccrew check            # verify config and dependencies
arccrew sync-skills      # update skills after upgrading
arccrew serve            # start FastAPI server
arccrew serve-mcp        # start MCP server (stdio)
```

---

## MCP server

Expose your pipelines as tools in any MCP-compatible client (Claude Desktop, Claude Code):

```bash
arccrew serve-mcp   # local stdio transport
arccrew serve       # also serves /mcp for remote HTTP transport
```

Remote connection (after deploying your project):
```json
{
  "mcpServers": {
    "my-project": {
      "url": "https://your-deployment-url/mcp"
    }
  }
}
```
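For a local connection, MCP clients such as Claude Desktop launch the server over stdio from their config file. The `command`/`args` shape is the standard MCP client format; adjust the command path to your environment:

```json
{
  "mcpServers": {
    "my-project": {
      "command": "arccrew",
      "args": ["serve-mcp"]
    }
  }
}
```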

---

## Utility helpers

```python
from arccrew.utils.helpers import truncate_text, extract_json_safe, slugify

# Truncate long LLM output before storing
short = truncate_text(long_response, max_chars=4000)

# Safely extract JSON from any LLM response
data = extract_json_safe(response, fallback={})

# Generate URL-safe slugs
slug = slugify("My Agent Result — 2026")  # "my-agent-result-2026"
```
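For intuition, here is a minimal stdlib-only sketch of the `extract_json_safe` idea: find the first `{...}` block in free-form LLM output and fall back gracefully when parsing fails. This is illustrative only; arccrew's implementation may differ.

```python
import json
import re

def extract_json_sketch(text: str, fallback=None):
    """Return the first parseable {...} block in text, else the fallback."""
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match:
        try:
            return json.loads(match.group(0))
        except json.JSONDecodeError:
            pass
    return fallback

extract_json_sketch('Here you go: {"score": 7}')   # {'score': 7}
extract_json_sketch("no json here", fallback={})   # {}
```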

---

## Example

See [`examples/researcher_writer/`](examples/researcher_writer/) for a complete working pipeline with two agents (Researcher + Writer) in both manual graph and supervisor patterns.

```bash
# Manual graph (BaseAgent subclasses)
python -m examples.researcher_writer.pipeline

# Supervisor pattern
python -m examples.researcher_writer.pipeline --supervisor
```

---

## License

ArcCrew is licensed under the [MIT License](LICENSE).
