Metadata-Version: 2.4
Name: heddle-ai
Version: 0.9.1
Summary: Lightweight Orchestrated Operational Mesh — Actor-based multi-LLM agent framework
Project-URL: Homepage, https://github.com/getheddle/heddle
Project-URL: Documentation, https://getheddle.github.io/heddle/
Project-URL: Repository, https://github.com/getheddle/heddle
Project-URL: Issues, https://github.com/getheddle/heddle/issues
Author-email: Iran Transition Project <admin@irantransitionproject.org>
License: MPL-2.0
License-File: LICENSE
Keywords: agents,ai,llm,mcp,multi-agent,nats,orchestration,rag
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: Mozilla Public License 2.0 (MPL 2.0)
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Application Frameworks
Classifier: Typing :: Typed
Requires-Python: >=3.11
Requires-Dist: click>=8.1.0
Requires-Dist: httpx>=0.27.0
Requires-Dist: nats-py>=2.7.0
Requires-Dist: pydantic>=2.5.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: structlog>=24.1.0
Requires-Dist: tiktoken>=0.7.0
Provides-Extra: chatbridge
Requires-Dist: openai>=1.0.0; extra == 'chatbridge'
Provides-Extra: council
Provides-Extra: dev
Requires-Dist: croniter>=2.0.0; extra == 'dev'
Requires-Dist: deepeval>=3.8.0; extra == 'dev'
Requires-Dist: duckdb>=1.0.0; extra == 'dev'
Requires-Dist: fastmcp>=3.1; extra == 'dev'
Requires-Dist: lancedb>=0.15.0; extra == 'dev'
Requires-Dist: mkdocs-material>=9.5; extra == 'dev'
Requires-Dist: mkdocstrings[python]>=0.24; extra == 'dev'
Requires-Dist: pyarrow>=15.0.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23; extra == 'dev'
Requires-Dist: pytest-cov>=5.0.0; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Requires-Dist: redis>=5.0.0; extra == 'dev'
Requires-Dist: ruff>=0.3.0; extra == 'dev'
Provides-Extra: docproc
Requires-Dist: docling>=2.0.0; extra == 'docproc'
Provides-Extra: docs
Requires-Dist: mkdocs-material>=9.5; extra == 'docs'
Requires-Dist: mkdocstrings[python]>=0.24; extra == 'docs'
Provides-Extra: duckdb
Requires-Dist: duckdb>=1.0.0; extra == 'duckdb'
Provides-Extra: eval
Requires-Dist: deepeval>=3.8.0; extra == 'eval'
Provides-Extra: lancedb
Requires-Dist: lancedb>=0.15.0; extra == 'lancedb'
Requires-Dist: pyarrow>=15.0.0; extra == 'lancedb'
Provides-Extra: local
Requires-Dist: ollama>=0.3.0; extra == 'local'
Provides-Extra: mcp
Requires-Dist: fastmcp>=3.1; extra == 'mcp'
Provides-Extra: mdns
Requires-Dist: zeroconf>=0.131.0; extra == 'mdns'
Provides-Extra: otel
Requires-Dist: opentelemetry-api>=1.20.0; extra == 'otel'
Requires-Dist: opentelemetry-exporter-otlp>=1.20.0; extra == 'otel'
Requires-Dist: opentelemetry-sdk>=1.20.0; extra == 'otel'
Provides-Extra: rag
Requires-Dist: duckdb>=1.0.0; extra == 'rag'
Requires-Dist: requests>=2.31.0; extra == 'rag'
Provides-Extra: redis
Requires-Dist: redis>=5.0.0; extra == 'redis'
Provides-Extra: scheduler
Requires-Dist: croniter>=2.0.0; extra == 'scheduler'
Provides-Extra: telegram
Requires-Dist: telethon>=1.36.0; extra == 'telegram'
Provides-Extra: tui
Requires-Dist: nats-py>=2.7.0; extra == 'tui'
Requires-Dist: textual>=3.0.0; extra == 'tui'
Provides-Extra: workshop
Requires-Dist: duckdb>=1.0.0; extra == 'workshop'
Requires-Dist: fastapi>=0.110.0; extra == 'workshop'
Requires-Dist: jinja2>=3.1.0; extra == 'workshop'
Requires-Dist: python-multipart>=0.0.9; extra == 'workshop'
Requires-Dist: uvicorn>=0.27.0; extra == 'workshop'
Description-Content-Type: text/markdown

# Heddle

[![CI](https://github.com/getheddle/heddle/actions/workflows/ci.yml/badge.svg)](https://github.com/getheddle/heddle/actions/workflows/ci.yml)
[![Docs](https://img.shields.io/badge/docs-GitHub%20Pages-blue)](https://getheddle.github.io/heddle/)
[![codecov](https://codecov.io/github/getheddle/heddle/graph/badge.svg?token=4N0F5XBZW9)](https://codecov.io/github/getheddle/heddle)
[![Ruff](https://img.shields.io/endpoint?url=https://raw.githubusercontent.com/astral-sh/ruff/main/assets/badge/v2.json)](https://github.com/astral-sh/ruff)
[![License: MPL 2.0](https://img.shields.io/badge/License-MPL_2.0-brightgreen.svg)](LICENSE)
[![Python 3.11+](https://img.shields.io/badge/python-3.11%2B-blue.svg)](https://www.python.org/downloads/)
<!-- Keep in sync with pyproject.toml version -->
[![Heddle v0.9.1](https://img.shields.io/badge/loom-v0.9.1-blueviolet.svg)](https://github.com/getheddle/heddle)
[![Status: Active Development](https://img.shields.io/badge/status-active_development-brightgreen.svg)]()

**Turn what you know into testable AI steps. Chain them into workflows.
Measure whether they work. Scale when ready.**

![Architecture Overview](docs/images/architecture-overview.svg)

---

## Try It in 60 Seconds

```bash
pip install "heddle-ai[workshop]"                       # install from PyPI (quotes needed in zsh)
heddle setup                                            # configure (auto-detects Ollama)
heddle workshop                                         # open web UI at localhost:8080
```

Open your browser → pick a worker (summarizer, classifier, extractor) →
paste any text → click Run. No data files needed.

**Have Telegram exports?** Install with `pip install "heddle-ai[rag]"` instead,
then run `heddle rag ingest`, `heddle rag search`, and `heddle rag serve` for
full social media stream analysis.

Or from source:

```bash
git clone https://github.com/getheddle/heddle.git && cd heddle
uv sync --extra workshop
uv run heddle setup
```

No servers to run. No configuration files to write. The setup wizard handles everything.

---

## What Heddle Does

Most AI tools give you one big prompt and one model. That works until it
doesn't — the prompt gets unwieldy, you can't test parts independently,
and asking the same model to review its own work doesn't catch real
problems.

Heddle splits AI work into focused **steps**. Each step has a clear job, a
typed contract (so you know what goes in and what comes out), and can use
a different model. You test steps individually, chain them into pipelines,
and measure whether changes help or hurt.

```text
  Document ──► Extract ──► Classify ──► Summarize ──► Report
                 │            │            │
                 │            │            └─ Claude Opus (complex reasoning)
                 │            └─ Ollama local (fast, free)
                 └─ Ollama local (fast, free)
```
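
The "typed contract" above can be pictured with Pydantic, which Heddle already lists as a dependency. The model below is a hypothetical illustration of a step's output contract, not an actual Heddle class:

```python
from pydantic import BaseModel, ValidationError

# Hypothetical output contract for a classify step -- illustrative only,
# not a class shipped by Heddle.
class ClassifyOutput(BaseModel):
    label: str
    confidence: float

# A well-formed step output validates cleanly...
out = ClassifyOutput.model_validate({"label": "news", "confidence": 0.92})
print(out.label, out.confidence)

# ...while a malformed one fails loudly at the step boundary,
# instead of flowing silently into the next step.
try:
    ClassifyOutput.model_validate({"label": "news"})
except ValidationError as exc:
    print("rejected:", exc.error_count(), "validation error")
```

The payoff in a pipeline is that a step's mistakes surface at its own boundary, not three steps downstream.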

Steps run in parallel when they can, and are tested with the built-in
Workshop web UI — all without deploying any infrastructure. When you're
ready to scale, Heddle adds a message bus (NATS) that connects everything
for production use.

The key idea: the bottleneck with AI is never the model's knowledge —
it's your ability to give it clear, precise instructions. Heddle makes
those instructions testable, version-tracked, and composable. The deeper
argument for this approach is in **[Why Heddle?](docs/WHY_HEDDLE.md).**

---

## Who This Is For

**Anyone hitting the limits of single-prompt AI.** Whether you're a
student comparing how different models answer questions, a teacher grading
essays and checking for bias, or a city clerk categorizing public comments
— if you need more than one AI step working together, Heddle gives you a
structured way to build that. Start with the six shipped workers in
Workshop. No coding needed.

**Researchers and analysts** — process documents, extract data, build
analytical pipelines. Define your own workers in YAML, test them in
Workshop, iterate until the output matches your judgment. Heddle's knowledge
silos and blind audit pattern let you get genuine adversarial review of
AI-generated analysis — not the pseudo-review you get when the same model
checks its own work.

**AI engineers** — build multi-step LLM workflows with typed contracts,
tool-use, knowledge injection, and pipeline orchestration. Test everything
locally before deploying.

**Platform teams** — deploy to Kubernetes with rate limiting, model tier
management, dead-letter handling, and OpenTelemetry tracing. Scale any
component independently.

---

## Three Ways to Use Heddle

### 1. Workshop (no setup beyond install)

Test shipped workers in the browser — paste text, get results:

```bash
heddle workshop                            # open web UI
# → Workers → summarizer → Test → paste text → Run
```

Six ready-made workers ship with Heddle: **summarizer**, **classifier**,
**extractor**, **translator**, **qa** (question answering with source
citations), and **reviewer** (quality review against configurable criteria).
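
A custom worker of the same shape is defined in YAML. The field names below are illustrative guesses, not Heddle's actual schema — the Workers Reference and the files scaffolded by `heddle new worker` are the source of truth:

```yaml
# Hypothetical worker config -- field names are illustrative only.
name: sentiment
tier: local                  # e.g. run on Ollama; a harder step could use a frontier tier
input:
  text: str                  # typed contract: what the step accepts
output:
  sentiment: str             # ...and what it must return
  confidence: float
prompt: |
  Rate the sentiment of the text as positive, negative, or neutral,
  and give a confidence between 0 and 1.
```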

### 2. Build your own steps (guided)

Scaffold workers and pipelines interactively — YAML is generated for you:

```bash
heddle new worker                   # create a step from prompts
heddle new pipeline                 # chain steps into a workflow
heddle validate configs/workers/*.yaml  # check your configs
heddle workshop                     # test and evaluate in the web UI
```
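
The generated pipeline YAML might look roughly like the sketch below (hypothetical field names; the file scaffolded by `heddle new pipeline` is authoritative):

```yaml
# Hypothetical pipeline config -- illustrative only.
name: report_pipeline
steps:
  - worker: extractor        # pulls structured facts from the document
  - worker: classifier       # can run in parallel with summarizer if independent
  - worker: summarizer
  - worker: reviewer         # blind review of the summary (knowledge silos)
```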

### 3. Distributed infrastructure (production)

For teams, continuous processing, or high-throughput scenarios:

```bash
heddle router --nats-url nats://localhost:4222
heddle worker --config configs/workers/summarizer.yaml --tier local
heddle pipeline --config configs/orchestrators/my_pipeline.yaml
heddle submit "Analyze the quarterly reports"
```

Scale any component by running more copies — NATS load-balances automatically.
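
"More copies" can be as literal as replicas of one container. A hypothetical Docker Compose sketch — the image name and environment variable are assumptions, not the shipped manifests:

```yaml
# Hypothetical docker-compose.yml -- illustrative, not the shipped manifest.
services:
  nats:
    image: nats:2.10
  summarizer:
    image: getheddle/heddle:latest   # assumed image name
    command: heddle worker --config configs/workers/summarizer.yaml --tier local
    environment:
      NATS_URL: nats://nats:4222     # assumed variable name
    deploy:
      replicas: 3                    # NATS load-balances work across all three
```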

---

## Key Features

| Feature | What It Does |
|---------|-------------|
| **6 Ready-Made Workers** | Summarizer, classifier, extractor, translator, QA, reviewer — chain them immediately |
| **Workshop** | Web UI for testing, evaluating, and comparing step outputs |
| **Built-in Evaluation** | Test suites, scoring, golden dataset baselines, regression detection |
| **Config-Driven** | Define workers in YAML — no Python code needed for LLM steps |
| **Knowledge Silos** | Per-worker access control; blind audit workers can't see what they're reviewing |
| **Pipeline Orchestration** | Chain steps with automatic dependency detection and parallelism |
| **Three Model Tiers** | Local (Ollama), Standard (Claude Sonnet), Frontier (Claude Opus) |
| **Document Processing** | PDF/DOCX extraction via MarkItDown (fast) or Docling (deep OCR) |
| **RAG Pipeline** | Telegram channel ingestion, chunking, vector search (DuckDB or LanceDB) |
| **Multi-Agent Councils** | Multi-round deliberation with protocols (debate, Delphi), convergence detection, transcript management |
| **ChatBridge Adapters** | Use Claude, GPT-4, Ollama, or humans as council participants with session history |
| **MCP Gateway** | Expose any workflow as an MCP server with a single YAML config |
| **Config Wizard** | `heddle setup` auto-detects backends; `heddle new` scaffolds workers/pipelines |
| **Live Monitoring** | TUI dashboard, OpenTelemetry tracing, dead-letter inspection |
| **Deployment** | Docker Compose, Kubernetes manifests, mDNS discovery |
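
As one illustration of the "single YAML config" claim for the MCP gateway, exposing a pipeline over MCP might be sketched like this (all field names here are hypothetical — see the MCP gateway docs for the real schema):

```yaml
# Hypothetical MCP gateway config -- illustrative only.
gateway:
  name: heddle-tools
  expose:
    - pipeline: report_pipeline
      tool_name: analyze_report    # appears as an MCP tool in connected clients
```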

---

## Documentation

Start here:

| Guide | Description |
|-------|-------------|
| **[Concepts](docs/CONCEPTS.md)** | How Heddle works — the mental model in plain language |
| **[Getting Started](docs/GETTING_STARTED.md)** | Install and get your first result |
| **[Why Heddle?](docs/WHY_HEDDLE.md)** | How Heddle compares to other frameworks — and when not to use it |
| **[Workshop Tour](docs/WORKSHOP_TOUR.md)** | What each Workshop screen does and when to use it |
| **[Configuration](docs/CONFIG.md)** | `~/.heddle/config.yaml` reference and priority chain |
| **[CLI Reference](docs/CLI_REFERENCE.md)** | All 19 commands with every flag and default |
| **[Workers Reference](docs/workers-reference.md)** | 6 shipped workers with I/O schemas and examples |

Go deeper:

| Guide | Description |
|-------|-------------|
| [RAG Pipeline](docs/rag-howto.md) | Social media stream analysis end-to-end |
| [Multi-Agent Councils](docs/council-howto.md) | Structured deliberation with multiple LLM agents |
| [Building Workflows](docs/building-workflows.md) | Custom steps, pipelines, tools, knowledge |
| [Workshop](docs/workshop.md) | Web UI architecture and enhancement guide |
| [Architecture](docs/ARCHITECTURE.md) | System design, message flow, NATS subjects |
| [Design Invariants](docs/DESIGN_INVARIANTS.md) | Non-obvious design decisions (read before structural changes) |
| [Troubleshooting](docs/TROUBLESHOOTING.md) | Common issues and solutions |
| [Deployment](docs/LOCAL_DEPLOYMENT.md) | Local, Docker, and [Kubernetes](docs/KUBERNETES.md) |

---

## Current State

| Area | Status | Details |
|------|--------|---------|
| Core framework | Complete | Messages, contracts, config, workspace |
| LLM backends | Complete | Anthropic, Ollama, OpenAI-compatible |
| Workers & processors | Complete | Tool-use, knowledge silos, embeddings |
| Orchestration | Complete | Goal decomposition, pipelines, scheduling |
| RAG pipeline | Complete | Ingest, chunk, embed, search (DuckDB + LanceDB) |
| Workshop web UI | Complete | Test bench, eval runner, pipeline editor |
| MCP gateway | Complete | FastMCP 3.x, session tools, workshop tools |
| Multi-agent deliberation | Complete | Council framework, ChatBridge adapters, 3 protocols |
| Tests | 1807 passing | 90%+ coverage, no infrastructure needed |

---

## Get Involved

**Use it.** Start with `heddle setup` and go from there.

**Contribute.** New step types, contrib packages, test coverage, and docs are
welcome. See [Contributing](docs/CONTRIBUTING.md).

**Report issues.** Bug reports with reproducible steps help the most.

---

## AI-Assisted Development

This project uses Claude (Anthropic) as a development tool.
[`CLAUDE.md`](CLAUDE.md) documents the architecture and design rules for
AI-assisted sessions. AI-generated code meets the same standards as human
contributions: typed messages, stateless workers, validated contracts, tests.

---

## License

[MPL 2.0](LICENSE) — file-level copyleft: files you modify must remain under the
MPL, but the library as a whole can be combined with proprietary code in a
larger work. Alternative licensing is available for organizations with
copyleft constraints.
Contact: <admin@irantransitionproject.org>

*For governance, succession, and contributor rights, see [GOVERNANCE.md](GOVERNANCE.md).*
