Metadata-Version: 2.4
Name: albusos
Version: 0.16.6
Summary: AlbusOS - Framework for building multi-agent systems with pathway-based execution
Keywords: ai,agent,llm,pathway,orchestration,skills
Author: Albus Studio
License-Expression: MIT
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Application Frameworks
Classifier: Typing :: Typed
Requires-Dist: pydantic>=2.0,<3
Requires-Dist: pydantic-graph>=1.47.0
Requires-Dist: pydantic-settings>=2.0,<3
Requires-Dist: pyyaml>=6.0.3,<7
Requires-Dist: python-dotenv>=1.0,<2
Requires-Dist: httpx>=0.27.0,<0.28
Requires-Dist: idna>=3.6,<4
Requires-Dist: jinja2>=3.1,<4
Requires-Dist: mcp>=1.26,<2 ; extra == 'mcp'
Requires-Dist: aiohttp>=3.9,<4 ; extra == 'ollama'
Requires-Dist: ddgs>=9.0,<10 ; extra == 'web'
Requires-Python: >=3.13, <3.14
Project-URL: Homepage, https://github.com/albusstudio/albusOS
Project-URL: Repository, https://github.com/albusstudio/albusOS
Project-URL: Issues, https://github.com/albusstudio/albusOS/issues
Provides-Extra: mcp
Provides-Extra: ollama
Provides-Extra: web
Description-Content-Type: text/markdown

# AlbusOS

Python framework for building agentic workflows as composable state graphs.

```bash
pip install albusos
```

---

## Quick Start

**Requires Python 3.13** (the package pins `>=3.13,<3.14`)

```bash
pip install albusos
export OPENROUTER_API_KEY="..."   # or OPENAI_API_KEY, or run Ollama locally
```
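
Optional features are packaged as extras; what each one enables below is inferred from the dependency it pulls in:

```bash
pip install "albusos[web]"      # ddgs -- DuckDuckGo-backed web search
pip install "albusos[ollama]"   # aiohttp -- local models via an Ollama server
pip install "albusos[mcp]"      # mcp -- Model Context Protocol support
```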

### Simple agent (LLM + tools loop)

```python
import asyncio
from albusos import agent, run

researcher = agent(
    "researcher",
    instructions="Research topics and provide concise summaries.",
    tools=["web.*", "memory.*"],
)

async def main():
    result = await run(researcher, "What is quantum computing?")
    print(result.response)

asyncio.run(main())
```

`agent()` auto-loads tools and LLM providers. `run()` wires the engine internally.
For most single-agent use cases, this is all you need.
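
`agent()` also takes a `model` argument when you want to pin one -- a capability name or an explicit model ID (see Model Routing below):

```python
# Same agent, pinned to the "fast" capability; AlbusOS routes it to a concrete model
researcher = agent(
    "researcher",
    instructions="Research topics and provide concise summaries.",
    tools=["web.*", "memory.*"],
    model="fast",
)
```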

### Multi-turn conversations

```python
import asyncio
from albusos import agent, Session

researcher = agent("researcher", instructions="Research topics.", tools=["web.*"])

async def main():
    session = Session(researcher)
    r1 = await session.run("What is quantum computing?")
    r2 = await session.run("Tell me more about qubits specifically")
    print(r2.response)  # Full conversation context

asyncio.run(main())
```

### Custom pathways (where the real power is)

When you need explicit multi-step workflows -- branching, chaining tools, routing between agents -- you compose them as executable graphs using `PathwayBuilder`:

```python
import asyncio
from albusos import PathwayBuilder, AgentBuilder, run

# A triage workflow: lookup → classify → branch → act
triage = (
    PathwayBuilder("triage", pathway_id="triage")
    .tool("lookup", "servicem8.search_customer", args={"query": "{{input.goal}}"})
    .llm("classify", "Classify urgency based on: {{lookup.output}}", model="fast")
    .conditional("check", "{{classify.output.urgency}} == 'high'", "escalate", "standard")
    .llm("escalate", "Create urgent job: {{input.goal}}", tools=["servicem8.*"])
    .llm("standard", "Create standard job: {{input.goal}}", tools=["servicem8.*"])
    .connect("input", "lookup")
    .connect("lookup", "classify")
    .connect("classify", "check")
    .connect("check", "escalate")
    .connect("check", "standard")
    .connect("escalate", "output")
    .connect("standard", "output")
    .build()
)

agent_def = AgentBuilder().id("dispatch").pathway("triage").tool("servicem8.*").build()

async def main():
    result = await run(agent_def, "Toilet overflow at 42 Smith St", pathway=triage)
    print(result.response)

asyncio.run(main())
```

The pathway gets: parallel execution, timeouts, execution budgets, observability,
and the ability to nest inside other pathways -- for free. You declare the workflow;
the VM handles the execution.

### Loading custom tools

```python
from albusos import load_tools, load_skill

# Load a directory of tool scripts (each .py with async def run())
load_tools("skills/servicem8/tools", namespace="servicem8")

# Or load a full skill (SKILL.md + tools/ + auto-registration)
load_skill("skills/servicem8")
```

---

## What is AlbusOS?

AlbusOS gives you three things:

1. **Simple agents** -- `agent()` + `run()` for LLM-with-tools. The on-ramp.
2. **Composable workflows** -- `PathwayBuilder` for multi-step agentic state graphs. The main event.
3. **Multi-agent orchestration** -- `agent.turn` and `agent.list` for routing between specialized agents.

```
albusos (the framework)                 Your repo (the product)
├── core/           Pathway VM, nodes   ├── skills/       SKILL.md + tools/
├── stdlib/         LLM routing, tools  ├── agents.py     Agent definitions
└── infrastructure/ Sandbox, tools      └── app.py        Your transport (FastAPI, etc.)
```

**AlbusOS handles:** Execution engine, LLM routing, tool registry, built-in tools, observability, state management, pathway composition.

**Your repo handles:** Domain tools, agent configs, workflows, and transport.
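
A minimal transport in your repo could be a thin FastAPI wrapper around `run()`, for instance. FastAPI is purely an illustration here -- AlbusOS does not ship or mandate any particular server:

```python
# app.py -- illustrative sketch; any transport works
from fastapi import FastAPI
from pydantic import BaseModel

from albusos import agent, run

app = FastAPI()
researcher = agent("researcher", instructions="Research topics.", tools=["web.*"])


class Ask(BaseModel):
    goal: str


@app.post("/ask")
async def ask(body: Ask) -> dict:
    result = await run(researcher, body.goal)
    return {"response": result.response}
```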

---

## Writing Tools

Each tool is a single Python file with an `async def run()` function:

```python
"""Search for ServiceM8 jobs by status."""

from albusos import ToolOutput


async def run(status: str = "open", limit: int = 20) -> ToolOutput:
    """
    Args:
        status: Job status filter (open, completed, all)
        limit: Maximum results to return
    """
    # `servicem8_api` is a stand-in for your own API client (not provided by AlbusOS)
    jobs = await servicem8_api.list_jobs(status=status, limit=limit)
    return ToolOutput(success=True, data={"jobs": jobs})
```

Place tools inside a skill directory:

```
skills/
└── servicem8/
    ├── SKILL.md              # Instructions for the agent
    └── tools/
        ├── list_jobs.py      # → servicem8.list_jobs
        ├── create_job.py     # → servicem8.create_job
        └── update_status.py  # → servicem8.update_status
```

Tools are auto-discovered and named `{skill}.{file}`. No decorators, no
registration, no class hierarchies.
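
For example, combining `load_skill()` with the `agent()` call from the Quick Start gives the agent everything under `skills/servicem8/tools/`:

```python
import asyncio
from albusos import agent, load_skill, run

load_skill("skills/servicem8")          # registers servicem8.* tools

dispatcher = agent(
    "dispatcher",
    instructions="Manage ServiceM8 jobs.",
    tools=["servicem8.*"],
)

async def main():
    result = await run(dispatcher, "List open jobs")
    print(result.response)

asyncio.run(main())
```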

---

## Pathways

Pathways are composable state graphs. `agent()` uses the built-in tool-calling
loop by default. `PathwayBuilder` lets you compose custom workflows when you
need explicit control.

### Node types

| Type | Builder method | What it does |
|------|---------------|-------------|
| `input` | `.input()` | Declare pathway inputs with schema |
| `output` | `.output()` | Map pathway outputs from upstream nodes |
| `llm` | `.llm()` | LLM call with optional tool-calling loop |
| `tool` | `.tool()` | Call any registered tool |
| `conditional` | `.conditional()` | Branch on a condition (if/else routing) |
| `transform` | `.transform()` | Evaluate a safe expression |
| `pathway` | `.sub_pathway()` | Nest a sub-pathway (composition) |
| `code_execute` | `.code_execute()` | Run sandboxed Python code |
| `loop` | `.loop_node()` | Iterate body nodes until condition met |
| `stage` | `.stage()` | Stateful workflow stage with transitions |
| `checkpoint` | `.checkpoint()` | Pause for human approval / persistence |

### Execution modes

| Mode | Behavior | Use when |
|------|----------|----------|
| `dag` (default) | Parallel, no cycles | Pipelines, fan-out/fan-in |
| `stateful` | Sequential, cycles OK | Conversations, human-in-the-loop |
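
How the mode is selected is not spelled out above; as a rough sketch, assuming `PathwayBuilder` accepts a mode (both the keyword name and the `PathwayMode` member below are assumptions, not confirmed API):

```python
from albusos import PathwayBuilder, PathwayMode

# Hypothetical: the `mode=` keyword and the STATEFUL member name are assumptions
chat = (
    PathwayBuilder("chat", pathway_id="chat", mode=PathwayMode.STATEFUL)
    .llm("reply", "Answer the user: {{input.goal}}")
    .connect("input", "reply")
    .connect("reply", "output")
    .build()
)
```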

### Template expressions

Reference upstream node outputs anywhere with `{{node_id.output}}` or `{{node_id.output.field}}`:

```python
.llm("summarize", "Summarize: {{search.output.results}}")
.tool("fetch", "web.fetch", args={"url": "{{input.url}}"})
.conditional("check", "{{classify.output.urgent}} == true", "fast_path", "slow_path")
```

### Composition

Pathways can nest inside other pathways, enabling modular workflow design:

```python
research = PathwayBuilder("research", pathway_id="research").llm("r", "...").build()
summarize = PathwayBuilder("summarize", pathway_id="summarize").llm("s", "...").build()

pipeline = (
    PathwayBuilder("full", pathway_id="full")
    .sub_pathway("step1", research)
    .sub_pathway("step2", summarize)
    .connect("input", "step1")
    .connect("step1", "step2")
    .connect("step2", "output")
    .build()
)
```

---

## Architecture

```
src/
├── albusos/           Public API (start here)
│   ├── agent()            One-call agent factory
│   ├── run()              Zero-wiring execution
│   ├── Session            Multi-turn conversations
│   ├── load_tools()       Load custom tool scripts
│   ├── load_skill()       Load a full skill directory
│   └── load_workspace()   Convention-based project discovery
├── core/              Engine (framework internals)
│   ├── runner.py          Session, default pathway, wiring
│   ├── agent.py           Agent runtime + AgentRepository
│   ├── config.py          Pydantic Settings (env vars, .env)
│   ├── builders/          PathwayBuilder, AgentBuilder, SkillBuilder
│   ├── pathways/          VM, nodes, DAG/stateful schedulers
│   ├── llm/               Provider protocol + capability routing + retry
│   ├── types/             Pydantic models (AgentDefinition, etc.)
│   └── protocols/         Interfaces (PathwayVMLike, StateStoreLike)
├── stdlib/            Built-in capabilities
│   ├── primitives/        Tools (web, memory, workspace, shell, code)
│   └── bootstrap.py       load_stdlib() — auto-loads tools + providers
└── infrastructure/    Sandbox, tool loader
```

### Layering rules

- `core/` has zero imports from `stdlib/` or `albusos/`
- `stdlib/` imports from `core/` only
- `infrastructure/` imports from `core/` only
- `albusos/` imports from `core/` and `stdlib/`

### Key imports

```python
# Simple agents
from albusos import agent, run, Session

# Custom pathways
from albusos import PathwayBuilder, AgentBuilder, ToolOutput

# Load custom tools / skills
from albusos import load_tools, load_skill, load_workspace

# Types
from albusos import AgentDefinition, Pathway, PathwayMode, ExecutionBudget, ExecutionResult

# Advanced (direct LLM access)
from core.llm import generate, get_provider
from core.llm.providers import ModelCapability, set_runtime_model_config
```

---

## Built-in Tools

Loaded automatically by `agent()` and `run()`:

| Tool | What it does |
|------|-------------|
| `web.search` | DuckDuckGo search |
| `web.fetch` | Fetch a URL (with HTTP error handling) |
| `memory.get` / `memory.set` / `memory.search` | Per-agent key-value memory |
| `memory.shared_get` / `memory.shared_set` | Cross-agent shared memory (atomic writes) |
| `workspace.read_file` / `workspace.write_file` / `workspace.list_files` | File I/O |
| `shell.execute` | Run shell commands |
| `code.execute` | Sandboxed Python execution |
| `code.run_test` | Run pytest tests |
| `agent.turn` / `agent.list` | Multi-agent orchestration |
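
The built-ins are enough for a coordinator that searches the web, keeps notes, and delegates to other registered agents via `agent.turn`. One plausible setup, using only the documented `agent()` call:

```python
from albusos import agent

# A coordinator that can search, remember, and delegate
coordinator = agent(
    "coordinator",
    instructions=(
        "Break the request into steps. Use web.search for facts, "
        "memory.set to keep notes, and agent.turn to delegate to specialists."
    ),
    tools=["web.*", "memory.*", "agent.*"],
)
```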

---

## Model Routing

Capability-based model selection -- swap models without changing agent code:

| Capability | Use for | Default |
|------------|---------|---------|
| `fast` | Quick tasks, routing | openai/gpt-4o-mini |
| `reasoning` | Complex thinking | openai/gpt-4o |
| `code` | Code generation | anthropic/claude-3.5-sonnet |
| `vision` | Image understanding | openai/gpt-4o |
| `local` | Offline/free | llama3.1:8b (Ollama) |

```python
# Capability name (recommended) — portable across providers
agent("a", model="reasoning")

# Explicit model (when you need a specific one)
agent("a", model="openai/gpt-4o")
```

Override at runtime via environment or code:

```bash
# Environment variables
export ALBUS_MODEL_FAST="anthropic/claude-haiku"
export ALBUS_MODEL_REASONING="anthropic/claude-sonnet-4"
```

```python
# Runtime code
from core.llm.providers import set_runtime_model_config
set_runtime_model_config({"reasoning": "anthropic/claude-sonnet-4"})
```

---

## Configuration

AlbusOS uses Pydantic Settings for centralized config. Settings are read from
environment variables and a `.env` file automatically.

| Variable | Purpose | Default |
|----------|---------|---------|
| `OPENROUTER_API_KEY` | OpenRouter API key (200+ models) | — |
| `OPENAI_API_KEY` | Direct OpenAI access (bypasses OpenRouter) | — |
| `OLLAMA_HOST` | Ollama server URL | `http://localhost:11434` |
| `ALBUS_MODEL_FAST` | Override fast model | openai/gpt-4o-mini |
| `ALBUS_MODEL_REASONING` | Override reasoning model | openai/gpt-4o |
| `ALBUS_MODEL_CODE` | Override code model | anthropic/claude-3.5-sonnet |
| `ALBUS_LLM_MAX_RETRIES` | LLM retry count (0-10) | 3 |
| `ALBUS_LLM_RETRY_BASE_DELAY` | Retry base delay seconds | 1.0 |

See `env.example` for a complete template.
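
As a starting point, a minimal `.env` using the variables above might look like this (values are placeholders):

```bash
# .env -- placeholder values
OPENROUTER_API_KEY=sk-or-...
ALBUS_MODEL_FAST=openai/gpt-4o-mini
ALBUS_MODEL_REASONING=anthropic/claude-sonnet-4
ALBUS_LLM_MAX_RETRIES=3
```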

---

## License

MIT
