Metadata-Version: 2.4
Name: dendrux
Version: 0.2.0a3
Summary: The runtime for agents that act in the real world.
Project-URL: Homepage, https://github.com/dendrux/dendrux
Project-URL: Repository, https://github.com/dendrux/dendrux
Author: Dendrux Contributors
License: Apache-2.0
License-File: LICENSE
Keywords: agents,ai,llm,runtime,tools
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Libraries :: Application Frameworks
Classifier: Typing :: Typed
Requires-Python: >=3.11
Requires-Dist: aiosqlite<1.0,>=0.19
Requires-Dist: alembic<2.0,>=1.13
Requires-Dist: pydantic<3.0,>=2.0
Requires-Dist: python-ulid<4.0,>=2.0
Requires-Dist: pyyaml<7.0,>=6.0
Requires-Dist: rich<15.0,>=13.0
Requires-Dist: sqlalchemy[asyncio]<3.0,>=2.0
Requires-Dist: typer<1.0,>=0.12
Provides-Extra: all
Requires-Dist: anthropic<1.0,>=0.40; extra == 'all'
Requires-Dist: asyncpg<1.0,>=0.29; extra == 'all'
Requires-Dist: fastapi<1.0,>=0.135; extra == 'all'
Requires-Dist: mcp<2,>=1.8; extra == 'all'
Requires-Dist: openai<3.0,>=1.50; extra == 'all'
Requires-Dist: opentelemetry-api<2.0,>=1.30; extra == 'all'
Requires-Dist: presidio-analyzer<3,>=2.2; extra == 'all'
Requires-Dist: uvicorn[standard]; extra == 'all'
Provides-Extra: anthropic
Requires-Dist: anthropic<1.0,>=0.40; extra == 'anthropic'
Provides-Extra: db
Provides-Extra: dev
Requires-Dist: httpx>=0.27; extra == 'dev'
Requires-Dist: mypy>=1.10; extra == 'dev'
Requires-Dist: opentelemetry-api<2.0,>=1.30; extra == 'dev'
Requires-Dist: opentelemetry-sdk<2.0,>=1.30; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23; extra == 'dev'
Requires-Dist: pytest-cov>=5.0; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Requires-Dist: python-dotenv>=1.0; extra == 'dev'
Requires-Dist: ruff>=0.8; extra == 'dev'
Requires-Dist: types-pyyaml>=6.0; extra == 'dev'
Provides-Extra: http
Requires-Dist: fastapi<1.0,>=0.135; extra == 'http'
Requires-Dist: uvicorn[standard]; extra == 'http'
Provides-Extra: mcp
Requires-Dist: mcp<2,>=1.8; extra == 'mcp'
Provides-Extra: openai
Requires-Dist: openai<3.0,>=1.50; extra == 'openai'
Provides-Extra: otel
Requires-Dist: opentelemetry-api<2.0,>=1.30; extra == 'otel'
Provides-Extra: postgres
Requires-Dist: asyncpg<1.0,>=0.29; extra == 'postgres'
Provides-Extra: presidio
Requires-Dist: presidio-analyzer<3,>=2.2; extra == 'presidio'
Description-Content-Type: text/markdown

# dendrux

> Python SDK for Dendrux — the framework for building agents with tools, persistence, and observability.

**Version:** 0.2.0a3

## Install

```bash
pip install "dendrux[all]"              # Everything
pip install "dendrux[anthropic,db]"     # Just Anthropic + SQLite
pip install "dendrux[openai,db]"        # Just OpenAI + SQLite
```

## Quick Example

```python
import asyncio
from dendrux import Agent, tool
from dendrux.llm.anthropic import AnthropicProvider

@tool()
async def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

async def main():
    async with Agent(
        provider=AnthropicProvider(model="claude-sonnet-4-6"),
        prompt="You are a calculator.",
        tools=[add],
    ) as agent:
        result = await agent.run("What is 15 + 27?")
        print(result.answer)

asyncio.run(main())
```

## Providers

| Provider | Import | Use case |
|----------|--------|----------|
| Anthropic | `from dendrux.llm.anthropic import AnthropicProvider` | Claude models |
| OpenAI | `from dendrux.llm.openai import OpenAIProvider` | GPT models + vLLM, SGLang, Groq, Ollama |
| OpenAI Responses | `from dendrux.llm.openai_responses import OpenAIResponsesProvider` | GPT + built-in tools (web search) |
| Mock | `from dendrux.llm.mock import MockLLM` | Deterministic testing |

## API Quick Reference

```python
from pathlib import Path
from dendrux.observers.console import ConsoleObserver

async with Agent(
    provider=provider,                  # Required: LLM provider
    prompt="...",                        # Required: system prompt
    tools=[add],                         # Optional: tool functions
    database_url=f"sqlite+aiosqlite:///{Path.home() / '.dendrux' / 'dendrux.db'}",
    redact=my_scrubber,                  # Optional: scrub persisted strings
) as agent:
    result = await agent.run(
        "What is 15 + 27?",
        observer=ConsoleObserver(),      # Optional: terminal output
        tenant_id="org-123",             # Optional: multi-tenant isolation
        metadata={"thread": "t1"},       # Optional: your linking data
    )
```
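The `redact=` hook above takes a callable that scrubs strings before they are persisted. The exact signature is documented in the full docs; as a minimal sketch, assuming it receives each string and returns the scrubbed version, a regex-based scrubber might look like this (`my_scrubber` is the hypothetical name used in the example above):

```python
import re

# Illustrative PII patterns; a real deployment would use something more
# thorough (e.g. the presidio extra) rather than hand-rolled regexes.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
SSN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def my_scrubber(text: str) -> str:
    """Replace obvious PII patterns before they reach the database."""
    text = EMAIL.sub("[EMAIL]", text)
    text = SSN.sub("[SSN]", text)
    return text

print(my_scrubber("Contact jane@example.com, SSN 123-45-6789"))
# → Contact [EMAIL], SSN [SSN]
```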

### RunResult

```python
result.answer          # str | None — the agent's final answer
result.status          # RunStatus — SUCCESS, ERROR, MAX_ITERATIONS, WAITING_CLIENT_TOOL, CANCELLED
result.steps           # list[AgentStep] — full reasoning chain
result.iteration_count # int — how many loop iterations ran
result.usage           # UsageStats — input_tokens, output_tokens, total_tokens
result.run_id          # str — unique run identifier (ULID)
result.error           # str | None — error message if status is ERROR
```

### Tool Options

```python
@tool()                                  # Basic server tool
@tool(target="client")                   # Client-side — agent pauses
@tool(max_calls_per_run=3)               # Limit calls per run
@tool(timeout_seconds=120)               # Custom timeout (default 120s)
@tool(parallel=False)                    # Run alone, not concurrently
```
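Tool parameter schemas are derived from the decorated function's type hints and docstring, which is why the `add` example above annotates its arguments. The stdlib sketch below shows the general introspection technique agent frameworks use for this; it is not dendrux's actual implementation, and `tool_schema` and `_JSON_TYPES` are hypothetical names:

```python
import inspect
from typing import get_type_hints

# Illustrative subset of the Python-annotation -> JSON-schema type mapping.
_JSON_TYPES = {int: "integer", float: "number", str: "string", bool: "boolean"}

def tool_schema(fn):
    """Build a JSON-schema-like tool description from a function signature."""
    hints = get_type_hints(fn)
    props = {
        name: {"type": _JSON_TYPES.get(hints.get(name), "string")}
        for name in inspect.signature(fn).parameters
    }
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn) or "",
        "parameters": {"type": "object", "properties": props},
    }

async def add(a: int, b: int) -> int:
    """Add two numbers."""
    return a + b

schema = tool_schema(add)
# schema["parameters"]["properties"] describes a and b as integers
```

Because the schema comes from the signature, precise type hints and a one-line docstring directly improve how the LLM calls your tool.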

## Full Documentation

See the [full documentation on GitHub](https://github.com/dendrux/dendrux) for provider setup, configuration, database guide, CLI, dashboard, observer system, and examples.
