Metadata-Version: 2.4
Name: motosan-agent-workflow
Version: 0.3.0
Summary: Pure Python implementation of the motosan-agent-workflow SDK
Author: motosan-dev
License: MIT
Project-URL: Repository, https://github.com/motosan-dev/motosan-agent-workflow
Project-URL: Homepage, https://github.com/motosan-dev/motosan-agent-workflow
Project-URL: Changelog, https://github.com/motosan-dev/motosan-agent-workflow/blob/main/CHANGELOG.md
Keywords: agent,workflow,dag,llm,ai
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Topic :: Software Development :: Libraries
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Provides-Extra: yaml
Requires-Dist: pyyaml>=6.0; extra == "yaml"
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21; extra == "dev"
Requires-Dist: pyyaml>=6.0; extra == "dev"

# motosan-agent-workflow (Python)

A general-purpose **DAG-based agent workflow engine** for orchestrating LLM agents.

[![PyPI](https://img.shields.io/pypi/v/motosan-agent-workflow)](https://pypi.org/project/motosan-agent-workflow/)
[![License: MIT](https://img.shields.io/badge/License-MIT-blue.svg)](../../LICENSE)

## Installation

```bash
pip install motosan-agent-workflow

# Optional YAML support:
pip install "motosan-agent-workflow[yaml]"
```

## Quick Start

```python
import asyncio
from motosan_agent_workflow import (
    Workflow, Node, Runtime, LlmClient, LlmResponse,
)


class MyLlm(LlmClient):
    async def call(self, system_prompt: str, input_data: dict, tools: list[str]) -> LlmResponse:
        return LlmResponse(content={"answer": "..."}, input_tokens=100, output_tokens=50)


async def main():
    workflow = (
        Workflow.builder("my-pipeline")
        .node(Node.agent("researcher", system_prompt="Find key facts about the topic."))
        .node(Node.agent("writer", system_prompt="Write a report.", input_from=["researcher"]))
        .edge("researcher", "writer")
        .build()
    )

    runtime = Runtime(MyLlm())
    result = await runtime.execute(workflow, {"topic": "Python in 2026"})
    print(result.node_output("writer").content)


asyncio.run(main())
```

## YAML Workflow

```python
from motosan_agent_workflow import load_workflow, load_workflow_from_str, Format

# From file
workflow = load_workflow("my-workflow.yaml")

# From string
workflow = load_workflow_from_str(yaml_str, Format.YAML)
```
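
A workflow file mirroring the Quick Start pipeline might look like the sketch below. The key names (`name`, `nodes`, `edges`, `id`, `type`, `system_prompt`) are illustrative assumptions, not a documented schema — check the SDK's loader for the exact format:

```yaml
# my-workflow.yaml — hypothetical layout, field names are assumptions
name: my-pipeline
nodes:
  - id: researcher
    type: agent
    system_prompt: Find key facts about the topic.
  - id: writer
    type: agent
    system_prompt: Write a report.
edges:
  - [researcher, writer]
```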

## Node Types

| Type | Description | LLM call |
|------|-------------|----------|
| `Node.agent()` | LLM-powered agent | Yes |
| `Node.human()` | Human-in-the-loop gate | No |
| `Node.transform()` | Pure function transform | No |
| `Node.condition()` | Conditional branching | No |
| `Node.loop_node()` | Iterative loop | Depends |
| `Node.sub_workflow()` | Nested sub-workflow | Depends |
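
To make the non-LLM node types concrete, here is a plain-Python sketch (not this SDK's API) of the behavior they provide: a transform node is a pure function over upstream output, and a condition node picks the next branch by name.

```python
# Conceptual sketch only — names and signatures are illustrative,
# not the motosan_agent_workflow API.

def transform(payload: dict) -> dict:
    # Transform node: reshape upstream data without calling an LLM.
    return {"word_count": len(payload["text"].split())}


def condition(payload: dict) -> str:
    # Condition node: return the id of the downstream node to run next.
    return "writer" if payload["word_count"] > 3 else "researcher"


out = transform({"text": "Python in 2026 is fast"})
print(condition(out))
# → writer
```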

## Features

- **Parallel execution** — Nodes in the same DAG layer run concurrently via `asyncio.gather`
- **Output schema validation** — JSON Schema with auto self-correction
- **Retry policies** — Per-node exponential backoff with Skip/Abort/Fallback modes
- **Human-in-the-loop** — Gates with timeout and default action
- **Event streaming** — `WorkflowEvent` via async callback
- **Token tracking** — Per-node and total, with budget enforcement
- **Cost estimation** — Pre-execution cost estimates with configurable pricing
- **YAML/JSON loader** — `load_workflow()` and `load_workflow_from_str()`
- **8 built-in templates** — code-review, research, brainstorm, report, playlist, content, architecture-review, sprint-planning
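
The parallel-execution idea can be sketched with the standard library alone (this is an illustration of the layering concept, not the engine's internals): group DAG nodes by depth, then run each layer with `asyncio.gather` so independent nodes execute concurrently.

```python
import asyncio

def layers(edges: dict[str, list[str]], nodes: list[str]) -> list[list[str]]:
    # A node's layer is 1 + the max layer of its parents; iterate to a
    # fixed point (fine for small acyclic graphs).
    depth = {n: 0 for n in nodes}
    changed = True
    while changed:
        changed = False
        for src, dsts in edges.items():
            for dst in dsts:
                if depth[dst] < depth[src] + 1:
                    depth[dst] = depth[src] + 1
                    changed = True
    out: list[list[str]] = [[] for _ in range(max(depth.values()) + 1)]
    for n, d in depth.items():
        out[d].append(n)
    return out

async def run_node(name: str) -> str:
    await asyncio.sleep(0)  # stand-in for an LLM call
    return name

async def run(edges: dict[str, list[str]], nodes: list[str]) -> list[list[str]]:
    results = []
    for layer in layers(edges, nodes):
        # Nodes in the same layer have no edges between them,
        # so they can be awaited together.
        results.append(await asyncio.gather(*(run_node(n) for n in layer)))
    return results

print(asyncio.run(run({"a": ["c"], "b": ["c"]}, ["a", "b", "c"])))
# → [['a', 'b'], ['c']]
```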

## Requirements

- Python >= 3.10
- Optional: `pyyaml >= 6.0` for YAML support

## Publishing

```bash
git tag python-v0.3.0
git push origin python-v0.3.0
# → triggers publish-python.yml → PyPI
```
