Metadata-Version: 2.4
Name: operonx
Version: 0.7.1
Summary: High-performance workflow engine for AI applications
Project-URL: Homepage, https://github.com/batman1m2001-cyber/Operonx
Project-URL: Documentation, https://batman1m2001-cyber.github.io/Operonx/
Project-URL: Repository, https://github.com/batman1m2001-cyber/Operonx
Project-URL: Changelog, https://github.com/batman1m2001-cyber/Operonx/blob/main/CHANGELOG.md
Project-URL: Issues, https://github.com/batman1m2001-cyber/Operonx/issues
Author: Operon Team
License-Expression: Apache-2.0
License-File: LICENSE
Keywords: ai,async,dag,llm,orchestration,pipeline,workflow
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.10
Requires-Dist: orjson>=3.9
Requires-Dist: pydantic>=2.0
Requires-Dist: pyyaml>=6.0.3
Requires-Dist: rich>=13.0
Provides-Extra: all
Requires-Dist: aiohttp>=3.8; extra == 'all'
Requires-Dist: boto3>=1.28; extra == 'all'
Requires-Dist: fastapi>=0.100; extra == 'all'
Requires-Dist: google-cloud-aiplatform>=1.38; extra == 'all'
Requires-Dist: httpx>=0.24; extra == 'all'
Requires-Dist: langfuse<3.0.0,>=2.6.0; extra == 'all'
Requires-Dist: numpy>=2.2.6; extra == 'all'
Requires-Dist: onnxruntime<1.20,>=1.15; extra == 'all'
Requires-Dist: openai>=1.0; extra == 'all'
Requires-Dist: opentelemetry-api>=1.20.0; extra == 'all'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc>=1.20.0; extra == 'all'
Requires-Dist: opentelemetry-exporter-otlp-proto-http>=1.20.0; extra == 'all'
Requires-Dist: opentelemetry-sdk>=1.20.0; extra == 'all'
Requires-Dist: requests>=2.32; extra == 'all'
Requires-Dist: tokenizers>=0.13; extra == 'all'
Requires-Dist: uvicorn[standard]>=0.20; extra == 'all'
Requires-Dist: websockets>=11.0; extra == 'all'
Provides-Extra: anthropic
Requires-Dist: httpx>=0.24; extra == 'anthropic'
Requires-Dist: openai>=1.0; extra == 'anthropic'
Provides-Extra: bedrock
Requires-Dist: boto3>=1.28; extra == 'bedrock'
Requires-Dist: openai>=1.0; extra == 'bedrock'
Provides-Extra: dev
Requires-Dist: httpx>=0.24; extra == 'dev'
Requires-Dist: pre-commit; extra == 'dev'
Requires-Dist: pytest-asyncio>=1.3.0; extra == 'dev'
Requires-Dist: pytest-cov>=4.0.0; extra == 'dev'
Requires-Dist: pytest-timeout>=2.4.0; extra == 'dev'
Requires-Dist: pytest>=9.0.2; extra == 'dev'
Requires-Dist: python-dotenv>=1.2.2; extra == 'dev'
Requires-Dist: ruff>=0.1; extra == 'dev'
Provides-Extra: docs
Requires-Dist: mkdocs-material>=9.5; extra == 'docs'
Requires-Dist: mkdocs>=1.6; extra == 'docs'
Requires-Dist: mkdocstrings[python]>=0.27; extra == 'docs'
Provides-Extra: gemini
Requires-Dist: google-cloud-aiplatform>=1.38; extra == 'gemini'
Requires-Dist: openai>=1.0; extra == 'gemini'
Requires-Dist: requests>=2.32; extra == 'gemini'
Provides-Extra: huggingface
Requires-Dist: numpy>=2.2.6; extra == 'huggingface'
Requires-Dist: torch>=2.0; extra == 'huggingface'
Requires-Dist: transformers>=4.30; extra == 'huggingface'
Provides-Extra: langfuse
Requires-Dist: langfuse<3.0.0,>=2.6.0; extra == 'langfuse'
Provides-Extra: onnx
Requires-Dist: numpy>=2.2.6; extra == 'onnx'
Requires-Dist: onnxruntime<1.20,>=1.15; extra == 'onnx'
Requires-Dist: tokenizers>=0.13; extra == 'onnx'
Provides-Extra: openai
Requires-Dist: openai>=1.0; extra == 'openai'
Provides-Extra: otel
Requires-Dist: opentelemetry-api>=1.20.0; extra == 'otel'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc>=1.20.0; extra == 'otel'
Requires-Dist: opentelemetry-exporter-otlp-proto-http>=1.20.0; extra == 'otel'
Requires-Dist: opentelemetry-sdk>=1.20.0; extra == 'otel'
Provides-Extra: serve
Requires-Dist: fastapi>=0.100; extra == 'serve'
Requires-Dist: uvicorn[standard]>=0.20; extra == 'serve'
Requires-Dist: websockets>=11.0; extra == 'serve'
Provides-Extra: standard
Requires-Dist: aiohttp>=3.8; extra == 'standard'
Requires-Dist: fastapi>=0.100; extra == 'standard'
Requires-Dist: httpx>=0.24; extra == 'standard'
Requires-Dist: langfuse<3.0.0,>=2.6.0; extra == 'standard'
Requires-Dist: numpy>=2.2.6; extra == 'standard'
Requires-Dist: openai>=1.0; extra == 'standard'
Requires-Dist: opentelemetry-api>=1.20.0; extra == 'standard'
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc>=1.20.0; extra == 'standard'
Requires-Dist: opentelemetry-exporter-otlp-proto-http>=1.20.0; extra == 'standard'
Requires-Dist: opentelemetry-sdk>=1.20.0; extra == 'standard'
Requires-Dist: uvicorn[standard]>=0.20; extra == 'standard'
Requires-Dist: websockets>=11.0; extra == 'standard'
Description-Content-Type: text/markdown

# Operonx

<p align="center">
  <a href="https://github.com/batman1m2001-cyber/Operonx/actions/workflows/tests.yaml"><img src="https://github.com/batman1m2001-cyber/Operonx/actions/workflows/tests.yaml/badge.svg?branch=main" alt="Tests"></a>
  <a href="https://github.com/batman1m2001-cyber/Operonx/actions/workflows/format.yaml"><img src="https://github.com/batman1m2001-cyber/Operonx/actions/workflows/format.yaml/badge.svg?branch=main" alt="Format"></a>
  <a href="https://github.com/batman1m2001-cyber/Operonx/actions/workflows/rust-runtime.yaml"><img src="https://github.com/batman1m2001-cyber/Operonx/actions/workflows/rust-runtime.yaml/badge.svg?branch=main" alt="Rust"></a>
  <a href="https://batman1m2001-cyber.github.io/Operonx/"><img src="https://github.com/batman1m2001-cyber/Operonx/actions/workflows/docs.yaml/badge.svg?branch=main" alt="Docs"></a>
  <a href="https://codecov.io/gh/batman1m2001-cyber/Operonx"><img src="https://codecov.io/gh/batman1m2001-cyber/Operonx/branch/main/graph/badge.svg" alt="Coverage"></a>
  <a href="https://pypi.org/project/operonx/"><img src="https://img.shields.io/pypi/v/operonx?label=PyPI" alt="PyPI"></a>
  <a href="https://crates.io/crates/operonx"><img src="https://img.shields.io/crates/v/operonx?label=crates.io" alt="crates.io"></a>
  <img src="https://img.shields.io/badge/python-3.10%2B-blue" alt="Python">
  <a href="https://github.com/batman1m2001-cyber/Operonx/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-Apache%202.0-green" alt="License"></a>
</p>

**Operonx** is a workflow engine that runs anything as a workflow — from IO-bound AI tasks (LLMs, agents, RAG) to CPU-bound workloads needing native performance. Define complex pipelines as DAGs with async execution, built-in tracing, and a dual Python/Rust backend.

## Why Operonx?

- **DAG-based workflows** — nodes and edges, inspired by Airflow operators
- **Dual backend** — Python for flexibility, Rust for raw speed (~8x faster on pure-compute)
- **Built-in tracing** — Langfuse + OpenTelemetry, plus a local viewer
- **Provider agnostic** — OpenAI, Azure, Gemini, Anthropic, vLLM, ONNX — swap with one line
- **Type-safe state** — O(1) state access with schema validation

## Quick Start

```bash
pip install operonx
```

```python
import asyncio
from operonx.core import Operon, GraphOp, op, START, END, PARENT

@op
def greet(name: str):
    return {"message": f"Hello, {name}!"}

async def main():
    with GraphOp(name="hello") as graph:
        step = greet(name=PARENT["name"])
        START >> step >> END

    result = await Operon(graph).run(inputs={"name": "World"})
    print(result["message"])  # Hello, World!

asyncio.run(main())
```

## LLM Integration

```bash
pip install "operonx[standard]"
```

Configure resources in `resources.yaml` and credentials in `.env`, then:

```python
import asyncio
import operonx
from operonx.core import Operon, GraphOp, START, END, PARENT
from operonx.providers import chat

async def main():
    operonx.bootstrap()  # loads ./.env + ./resources.yaml

    with GraphOp(name="chat") as graph:
        c = chat(
            resource="gpt-4o",
            template={"system": "You are a helpful assistant.", "user": "{question}"},
            question=PARENT["question"],
        )
        START >> c >> END

    result = await Operon(graph).run(inputs={"question": "What is Python?"})
    print(result["content"])

asyncio.run(main())
```

See [Resource Setup](CLAUDE.md#resource-setup-bootstrap--resourcehub) for details on `bootstrap()` and `resources.yaml`.
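
The exact keys expected in `.env` depend on which providers you enable. As a hedged sketch, assuming the bundled OpenAI and Langfuse SDKs read their standard environment variables (variable names below are those SDK conventions, not Operonx-specific):

```bash
# .env (sketch) — adjust to match the resources declared in resources.yaml
OPENAI_API_KEY=sk-...
LANGFUSE_PUBLIC_KEY=pk-lf-...
LANGFUSE_SECRET_KEY=sk-lf-...
LANGFUSE_HOST=https://cloud.langfuse.com
```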

## Installation

Operonx is a single Python package with optional extras for each integration:

```bash
pip install operonx                    # Core engine, no providers
pip install "operonx[standard]"        # Recommended — OpenAI + Langfuse + OTEL + serve
pip install "operonx[anthropic]"       # Anthropic-only
pip install "operonx[onnx]"            # Local ONNX inference
pip install "operonx[serve]"           # FastAPI + uvicorn HTTP server
pip install "operonx[all]"             # All providers and tracers (excludes huggingface)
```

| Extra | Contents |
|-------|----------|
| `standard` | OpenAI, Langfuse, OpenTelemetry, FastAPI/uvicorn |
| `anthropic` | httpx + openai client (Anthropic provider) |
| `gemini` | Google Vertex AI |
| `bedrock` | AWS Bedrock |
| `onnx` | ONNX Runtime + tokenizers |
| `huggingface` | transformers + torch (heavy — ~2.5 GB) |
| `langfuse` | Langfuse tracer |
| `otel` | OpenTelemetry tracer |
| `serve` | FastAPI + uvicorn HTTP server |
| `all` | Everything except `huggingface` |
| `dev` | pytest (+ asyncio, cov, timeout plugins), ruff, pre-commit |
| `docs` | mkdocs, mkdocs-material, mkdocstrings |

Rust users:

```bash
cargo add operonx
```

## Tracing

```python
from operonx.core import Operon
from operonx.telemetry.tracers import LangfuseTracer

engine = Operon(graph, tracer=LangfuseTracer(resource="langfuse:default"))
```

Backends supported: Langfuse, OpenTelemetry. Configure via `resources.yaml`.
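
Putting it together, a minimal end-to-end sketch that attaches the Langfuse tracer to the Quick Start graph (assuming the constructs shown above compose as written and Langfuse credentials are loaded by `bootstrap()`):

```python
import asyncio

import operonx
from operonx.core import Operon, GraphOp, op, START, END, PARENT
from operonx.telemetry.tracers import LangfuseTracer

@op
def greet(name: str):
    return {"message": f"Hello, {name}!"}

async def main():
    operonx.bootstrap()  # loads ./.env + ./resources.yaml, including the Langfuse resource

    with GraphOp(name="hello-traced") as graph:
        step = greet(name=PARENT["name"])
        START >> step >> END

    # Pass the tracer to the engine so each run is recorded in Langfuse
    engine = Operon(graph, tracer=LangfuseTracer(resource="langfuse:default"))
    result = await engine.run(inputs={"name": "World"})
    print(result["message"])  # Hello, World!

asyncio.run(main())
```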

## Documentation

| Need | Go to |
|------|-------|
| Runnable examples | [examples/](examples/) |
| Architecture | [docs/architecture/](docs/architecture/) |
| User guide | [docs/guide/](docs/guide/) |
| API reference | [https://batman1m2001-cyber.github.io/Operonx/](https://batman1m2001-cyber.github.io/Operonx/) |

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md).

```bash
git clone https://github.com/batman1m2001-cyber/Operonx.git
cd Operonx
uv sync --all-extras
pre-commit install
uv run pytest tests/ -m "not integration"
```

## License

Apache 2.0
