Metadata-Version: 2.4
Name: apflow
Version: 0.18.2
Summary: Agent workflow orchestration and execution platform
Author-email: aiperceivable <tercel.yi@gmail.com>
License: Apache-2.0
Project-URL: Homepage, https://aiperceivable.com
Project-URL: Source, https://github.com/aiperceivable/apflow
Keywords: ai,agent,orchestration,workflow,task,crewai,a2a
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pydantic[email]>=2.0.0
Requires-Dist: pydantic-settings>=2.0.0
Requires-Dist: sqlalchemy>=2.0.0
Requires-Dist: sqlalchemy-session-proxy>=0.1.0
Requires-Dist: alembic>=1.13.0
Requires-Dist: duckdb-engine>=0.10.0
Requires-Dist: pytz>=2024.1
Provides-Extra: a2a
Requires-Dist: fastapi>=0.115.0; extra == "a2a"
Requires-Dist: uvicorn[standard]>=0.29.0; extra == "a2a"
Requires-Dist: a2a-sdk[http-server]>=0.3.22; extra == "a2a"
Requires-Dist: httpx[socks]>=0.27.0; extra == "a2a"
Requires-Dist: aiohttp[speedups]>=3.9.0; extra == "a2a"
Requires-Dist: starlette>=0.27.0; extra == "a2a"
Requires-Dist: websockets>=12.0; extra == "a2a"
Requires-Dist: python-jose[cryptography]>=3.3.0; extra == "a2a"
Provides-Extra: cli
Requires-Dist: click>=8.0.0; extra == "cli"
Requires-Dist: rich>=13.0.0; extra == "cli"
Requires-Dist: typer>=0.9.0; extra == "cli"
Requires-Dist: python-dotenv>=1.0.0; extra == "cli"
Requires-Dist: nest_asyncio>=1.5.0; extra == "cli"
Requires-Dist: httpx>=0.27.0; extra == "cli"
Requires-Dist: PyJWT>=2.8.0; extra == "cli"
Requires-Dist: pyyaml>=6.0.0; extra == "cli"
Provides-Extra: docs
Requires-Dist: mkdocs>=1.6.0; extra == "docs"
Requires-Dist: mkdocs-material>=9.5.0; extra == "docs"
Requires-Dist: mkdocs-mermaid2-plugin>=1.0.0; extra == "docs"
Requires-Dist: mkdocs-minify-plugin>=0.7.0; extra == "docs"
Requires-Dist: pymdown-extensions>=10.0.0; extra == "docs"
Provides-Extra: postgres
Requires-Dist: asyncpg>=0.29.0; extra == "postgres"
Requires-Dist: psycopg2-binary>=2.9.9; extra == "postgres"
Requires-Dist: greenlet>=3.0.0; extra == "postgres"
Provides-Extra: crewai
Requires-Dist: crewai[tools]>=1.7.2; extra == "crewai"
Requires-Dist: litellm>=1.0.0; extra == "crewai"
Requires-Dist: anthropic>=0.34.0; extra == "crewai"
Requires-Dist: aiodns>=3.6.1; extra == "crewai"
Provides-Extra: llm-key-config
Provides-Extra: scheduling
Requires-Dist: croniter>=1.0.0; extra == "scheduling"
Provides-Extra: email
Requires-Dist: aiosmtplib>=3.0.0; extra == "email"
Provides-Extra: ssh
Requires-Dist: asyncssh>=2.14.0; extra == "ssh"
Provides-Extra: docker
Requires-Dist: docker>=7.0.0; extra == "docker"
Provides-Extra: grpc
Requires-Dist: grpclib>=0.4.7; extra == "grpc"
Requires-Dist: protobuf>=4.25.0; extra == "grpc"
Provides-Extra: graphql
Requires-Dist: strawberry-graphql>=0.220.0; extra == "graphql"
Requires-Dist: fastapi>=0.115.0; extra == "graphql"
Requires-Dist: starlette>=0.27.0; extra == "graphql"
Requires-Dist: uvicorn[standard]>=0.29.0; extra == "graphql"
Provides-Extra: mcp
Provides-Extra: llm
Requires-Dist: litellm>=1.0.0; extra == "llm"
Provides-Extra: tools
Requires-Dist: requests>=2.31.0; extra == "tools"
Requires-Dist: beautifulsoup4>=4.12.0; extra == "tools"
Requires-Dist: trafilatura>=2.0.0; extra == "tools"
Requires-Dist: brotli==1.2.0; extra == "tools"
Requires-Dist: bs4>=0.0.2; extra == "tools"
Provides-Extra: standard
Requires-Dist: apflow[a2a,cli,crewai,llm,scheduling,tools]; extra == "standard"
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0.0; extra == "dev"
Requires-Dist: pytest-timeout>=2.1.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Requires-Dist: mypy>=1.5.0; extra == "dev"
Requires-Dist: build>=1.0.0; extra == "dev"
Requires-Dist: twine>=4.0.0; extra == "dev"
Requires-Dist: jsonfinder>=0.4.0; extra == "dev"
Requires-Dist: pre-commit>=3.0.0; extra == "dev"
Requires-Dist: apdev[dev]>=0.1.6; extra == "dev"
Requires-Dist: memory-profiler>=0.61.0; extra == "dev"
Requires-Dist: psutil>=5.9.0; extra == "dev"
Provides-Extra: all
Requires-Dist: apflow[a2a,cli,crewai,docker,email,graphql,grpc,llm,llm-key-config,mcp,postgres,scheduling,ssh,tools]; extra == "all"
Dynamic: license-file

# apflow

<p align="center">
  <img src="apflow-logo.svg" alt="apflow Logo" width="128" height="128" />
</p>

**Distributed Task Orchestration Framework**

apflow is a distributed task orchestration framework that scales from a single process to multi-node clusters. It provides a unified execution interface with 16 built-in executors, A2A protocol support, and automatic leader election with failover.

Start standalone in 30 seconds. Scale to distributed clusters without code changes.

## Deployment Modes

### Standalone (Development and Small Workloads)

```bash
pip install apflow
```

Single process, DuckDB storage, zero configuration. Ideal for development, testing, and small-scale automation.

```python
import asyncio

from apflow.core.builders import TaskBuilder
from apflow import TaskManager, create_session

async def main():
    db = create_session()
    task_manager = TaskManager(db)
    result = await (
        TaskBuilder(task_manager, "rest_executor")
        .with_name("fetch_data")
        .with_input("url", "https://api.example.com/data")
        .with_input("method", "GET")
        .execute()
    )
    print(result)

asyncio.run(main())
```

### Distributed Cluster (Production)

```bash
pip install apflow[standard]
```

PostgreSQL-backed, leader/worker topology with automatic leader election, task leasing, and horizontal scaling. Same application code -- only the runtime environment changes.

```bash
# Leader node
APFLOW_CLUSTER_ENABLED=true \
APFLOW_DATABASE_URL=postgresql+asyncpg://user:pass@db:5432/apflow \
APFLOW_NODE_ROLE=auto \
apflow serve --port 8000

# Worker node (on additional machines)
APFLOW_CLUSTER_ENABLED=true \
APFLOW_DATABASE_URL=postgresql+asyncpg://user:pass@db:5432/apflow \
APFLOW_NODE_ROLE=worker \
apflow serve --port 8001
```

Add worker nodes at any time. The cluster auto-discovers them via the shared database.
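
For container deployments, the two `apflow serve` invocations above can be captured in a Docker Compose file. The sketch below is illustrative only: the image name `aiperceivable/apflow` is an assumption (build or tag your own image), while the `APFLOW_*` environment variables are the ones shown above.

```yaml
# Hypothetical compose file; the image name is an assumption,
# the APFLOW_* variables match the example above.
services:
  db:
    image: postgres:16
    environment:
      POSTGRES_USER: user
      POSTGRES_PASSWORD: pass
      POSTGRES_DB: apflow

  leader:
    image: aiperceivable/apflow   # assumed image name
    command: apflow serve --port 8000
    environment:
      APFLOW_CLUSTER_ENABLED: "true"
      APFLOW_DATABASE_URL: postgresql+asyncpg://user:pass@db:5432/apflow
      APFLOW_NODE_ROLE: auto
    depends_on: [db]

  worker:
    image: aiperceivable/apflow   # assumed image name
    command: apflow serve --port 8001
    environment:
      APFLOW_CLUSTER_ENABLED: "true"
      APFLOW_DATABASE_URL: postgresql+asyncpg://user:pass@db:5432/apflow
      APFLOW_NODE_ROLE: worker
    depends_on: [db]
```

Running `docker compose up --scale worker=4` then starts additional workers, which the cluster discovers through the shared database.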

## Installation Options

| Extra | Command | Includes |
|-------|---------|----------|
| Core | `pip install apflow` | Task orchestration, DuckDB storage, core executors |
| Standard | `pip install apflow[standard]` | Core + A2A server + CLI + CrewAI + LLM + scheduling + tools |
| A2A Server | `pip install apflow[a2a]` | A2A Protocol server (HTTP/SSE/WebSocket) |
| CLI | `pip install apflow[cli]` | Command-line interface |
| PostgreSQL | `pip install apflow[postgres]` | PostgreSQL storage (required for distributed) |
| CrewAI | `pip install apflow[crewai]` | LLM-based agent crews |
| LLM | `pip install apflow[llm]` | Direct LLM via LiteLLM (100+ providers) |
| SSH | `pip install apflow[ssh]` | Remote command execution |
| Docker | `pip install apflow[docker]` | Containerized execution |
| gRPC | `pip install apflow[grpc]` | gRPC service calls |
| Email | `pip install apflow[email]` | Email sending (SMTP) |
| All | `pip install apflow[all]` | Everything |

## Built-in Executors

| Executor | Purpose | Extra |
|----------|---------|-------|
| RestExecutor | HTTP/REST API calls with auth and retry | core |
| CommandExecutor | Local shell command execution | core |
| SystemInfoExecutor | System information collection | core |
| ScrapeExecutor | Web page scraping | core |
| WebSocketExecutor | Bidirectional WebSocket communication | core |
| McpExecutor | Model Context Protocol tools and data sources | core |
| ApFlowApiExecutor | Inter-instance API calls for distributed execution | core |
| AggregateResultsExecutor | Aggregate results from multiple tasks | core |
| SshExecutor | Remote command execution via SSH | [ssh] |
| DockerExecutor | Containerized command execution | [docker] |
| GrpcExecutor | gRPC service calls | [grpc] |
| SendEmailExecutor | Send emails via SMTP or Resend API | [email] |
| CrewaiExecutor | LLM agent crews via CrewAI | [crewai] |
| BatchCrewaiExecutor | Atomic batch of multiple crews | [crewai] |
| LLMExecutor | Direct LLM interaction via LiteLLM | [llm] |
| GenerateExecutor | Natural language to task tree via LLM | [llm] |

## Architecture

```
                    +---------------------------+
                    |    Client / CLI / API     |
                    +-------------+-------------+
                                  |
               +------------------+----------------+
               |                  |                |
     +---------v---------+ +------v------+ +-------v-------+
     |    Leader Node    | | Worker Node | |  Worker Node  |
     |   (auto-elected)  | |             | |               |
     |  - Task placement | |  - Execute  | |  - Execute    |
     |  - Lease mgmt     | | - Heartbeat | |  - Heartbeat  |
     |  - Execute        | |             | |               |
     +---------+---------+ +------+------+ +-------+-------+
               |                  |                |
               +------------------+----------------+
                                  |
                     +------------v------------+
                     |   PostgreSQL (shared)   |
                     |   - Task state          |
                     |   - Leader lease        |
                     |   - Node registry       |
                     +-------------------------+
```

*Standalone mode uses the same architecture with a single node and DuckDB storage.*
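
The leader lease in the diagram follows the standard TTL-lease pattern: a node becomes leader by writing its id and an expiry into the shared store, renews the lease while healthy, and any node may take over once the lease expires. The following is a conceptual sketch using an in-memory store, not apflow's actual schema or API:

```python
class LeaseStore:
    """In-memory stand-in for the shared database's leader-lease row."""

    def __init__(self):
        self.holder = None
        self.expires_at = 0.0

    def try_acquire(self, node_id: str, ttl: float, now: float) -> bool:
        # A node wins the lease if it is free, already held by that node
        # (renewal), or expired (failover).
        if self.holder in (None, node_id) or now >= self.expires_at:
            self.holder = node_id
            self.expires_at = now + ttl
            return True
        return False

store = LeaseStore()
assert store.try_acquire("node-a", ttl=5.0, now=0.0)      # node-a becomes leader
assert not store.try_acquire("node-b", ttl=5.0, now=2.0)  # lease still held
assert store.try_acquire("node-b", ttl=5.0, now=6.0)      # lease expired -> failover
```

In a real cluster the compare-and-set happens atomically in the database (e.g. a conditional `UPDATE`), so two nodes cannot both win an expired lease.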

## Documentation

- [Getting Started](docs/getting-started/quick-start.md) -- Up and running in 10 minutes
- [Distributed Cluster Guide](docs/guides/distributed-cluster.md) -- Multi-node deployment
- [Executor Selection Guide](docs/guides/executor-selection.md) -- Choose the right executor
- [API Reference](docs/api/python.md) -- Python API documentation
- [Architecture Overview](docs/architecture/overview.md) -- Design and internals
- [Protocol Specification](docs/protocol/index.md) -- A2A Protocol spec

Full documentation: [flow-docs.aiperceivable.com](https://flow-docs.aiperceivable.com)

## Contributing

Contributions are welcome. See [Contributing Guide](docs/development/contributing.md) for setup and guidelines.

## License

Apache-2.0

## Links

- **Documentation**: [flow-docs.aiperceivable.com](https://flow-docs.aiperceivable.com)
- **Website**: [aiperceivable.com](https://aiperceivable.com)
- **GitHub**: [aiperceivable/apflow](https://github.com/aiperceivable/apflow)
- **PyPI**: [apflow](https://pypi.org/project/apflow/)
