Metadata-Version: 2.4
Name: agentmatrix-core
Version: 0.7.0.4
Summary: Core execution engine for building AI agent applications — MicroAgent, AgentShell protocol, Cerebellum, and skill system
Author: Agent-Matrix Contributors
License: Apache-2.0
Project-URL: Homepage, https://github.com/webdkt/agentmatrix
Project-URL: Documentation, https://github.com/webdkt/agentmatrix/tree/main/docs/core
Project-URL: Repository, https://github.com/webdkt/agentmatrix
Project-URL: Issues, https://github.com/webdkt/agentmatrix/issues
Keywords: agents,ai,llm,framework,multi-agent
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development :: Libraries :: Application Frameworks
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Requires-Python: >=3.12
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pyyaml>=6.0
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: requests>=2.31.0
Requires-Dist: aiohttp>=3.8.0
Requires-Dist: Jinja2>=3.1.0
Requires-Dist: pydantic>=2.0.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: black>=23.0; extra == "dev"
Requires-Dist: flake8>=6.0; extra == "dev"
Requires-Dist: mypy>=1.0; extra == "dev"
Dynamic: license-file

# agentmatrix-core

**Core execution engine for building AI agent applications.**

**Let LLMs think. Don't make them write JSON.**

AgentMatrix separates reasoning from formatting. The large model thinks in natural language. A smaller model (Cerebellum) translates intent into executable parameters. Two models, each doing what they're best at.
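The division of labor can be sketched with a toy example — both "models" here are plain functions standing in for LLM calls, and the intent-parsing regex is purely illustrative:

```python
import re

# Toy illustration of the Brain/Cerebellum split (stand-ins, not the
# library's implementation): the "brain" answers in free-form prose,
# the "cerebellum" extracts the stated intent into structured parameters.
def brain(task: str) -> str:
    return f"I should search the web for '{task}' and read the top result."

def cerebellum(thought: str) -> dict:
    m = re.search(r"search the web for '([^']+)'", thought)
    if m:
        return {"action": "web_search", "query": m.group(1)}
    return {"action": "none"}

thought = brain("python 3.13 release notes")
print(cerebellum(thought))
# → {'action': 'web_search', 'query': 'python 3.13 release notes'}
```

The point of the split: the brain never has to emit valid JSON, and the cerebellum never has to reason about the task.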

## Install

```bash
pip install agentmatrix-core
```

Requires Python 3.12+.

## Architecture

```
┌─────────────────────────────────────────────┐
│  App Layer     Your Application             │
├─────────────────────────────────────────────┤
│  Shell Layer   AgentShell Protocol          │
│                (interface you implement)    │
├─────────────────────────────────────────────┤
│  Core Layer    MicroAgent Engine            │
│                (this package)               │
└─────────────────────────────────────────────┘
```

- **Core Layer** — `MicroAgent` is the execution engine. Pure reasoning loop: think, detect actions, execute, repeat. No I/O, no UI.
- **Shell Layer** — `AgentShell` is the protocol you implement to connect Core to the outside world (LLM clients, prompt templates, session storage, etc.).
- **App Layer** — Your application that wires everything together.

This separation means the same core agent behavior runs anywhere — desktop, terminal, or cloud.
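A self-contained sketch of why this works — the engine below depends only on a structural protocol, so any host that implements it can run it. (Method names follow the AgentShell description in Quick Start; the signatures are assumptions, and the real contract lives in `agentmatrix.core.agent_shell`.)

```python
from typing import Protocol

# Illustration only, not the library's actual interface. Because the
# engine sees nothing but this protocol, any host (desktop, terminal,
# cloud) that implements it can run the same core behavior.
class ShellLike(Protocol):
    def get_system_prompt(self) -> str: ...
    def on_agent_message(self, message: str) -> None: ...

def run_engine(shell: ShellLike, user_input: str) -> None:
    # A stand-in for the real reasoning loop.
    prompt = shell.get_system_prompt()
    shell.on_agent_message(f"[{prompt}] {user_input}")

class TerminalShell:
    def get_system_prompt(self) -> str:
        return "cli"

    def on_agent_message(self, message: str) -> None:
        print(message)

run_engine(TerminalShell(), "hello")
# → [cli] hello
```

Swapping `TerminalShell` for a cloud or desktop shell changes nothing in `run_engine` — that is the portability the layering buys.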

## Quick Start

### 1. Implement AgentShell

`AgentShell` is the interface between the Core engine and your application:

```python
from agentmatrix.core.agent_shell import AgentShell
from agentmatrix.core.micro_agent import MicroAgent

class MyShell(AgentShell):
    # Implement the required methods:
    # - get_llm_client()    → your LLM backend
    # - get_system_prompt() → prompt template
    # - get_session_store() → session persistence
    # - on_action_result()  → handle action outputs
    # - on_agent_message()  → handle agent responses
    ...
```

### 2. Create a MicroAgent and Run

```python
agent = MicroAgent(
    name="my-agent",
    shell=my_shell,
    skills=["file", "web-search"],
)

# Start the reasoning loop (agent.run is a coroutine —
# call it from async code, e.g. via asyncio.run)
await agent.run("List files in the current directory")
```

### 3. See a Working Example

A complete terminal agent (~200 lines) is available in the repository:

```bash
git clone https://github.com/webdkt/agentmatrix.git
cd agentmatrix/tutorial/cli-agent

export OPENAI_API_KEY=sk-xxx
python main.py -m https://endpoint-url:deepseek-v4-pro
```

## Key Modules

| Module | Description |
|--------|-------------|
| `core.micro_agent` | The execution engine — think, detect actions, execute, repeat |
| `core.agent_shell` | Shell protocol — implement this for your app |
| `core.cerebellum` | Intent-to-action parameter negotiation |
| `core.action` | Action registry and execution |
| `core.session_store` | Session persistence interface |
| `core.signals` | Event-driven communication (pause, resume, stop) |

## Key Features

### Natural Language Reasoning

The agent's "Brain" reasons entirely in natural language. No JSON output required, no format constraints. A separate "Cerebellum" translates intent into executable parameters.

### Pause, Resume, Stop

Any running agent can be paused, resumed, or stopped via signals. State is preserved at safe checkpoints.
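The checkpoint idea can be illustrated with a self-contained toy (this is NOT agentmatrix's API — its real signal handling lives in `core.signals`; `ToyAgent` and its methods are invented for illustration):

```python
import asyncio

# Toy illustration of signal-driven pause/resume/stop: the loop checks a
# resume event at each safe checkpoint, so state is never torn mid-step.
class ToyAgent:
    def __init__(self) -> None:
        self._resume = asyncio.Event()
        self._resume.set()             # start in the running state
        self._stopped = False
        self.steps_done = 0

    def pause(self) -> None:
        self._resume.clear()

    def resume(self) -> None:
        self._resume.set()

    def stop(self) -> None:
        self._stopped = True
        self._resume.set()             # wake the loop so it can exit

    async def run(self, steps: int) -> None:
        for _ in range(steps):
            await self._resume.wait()  # safe checkpoint
            if self._stopped:
                break
            self.steps_done += 1
            await asyncio.sleep(0)     # yield to the event loop

async def demo() -> int:
    agent = ToyAgent()
    task = asyncio.create_task(agent.run(10**9))
    await asyncio.sleep(0.01)          # let it make some progress
    agent.pause()
    await asyncio.sleep(0.01)          # in-flight step finishes, loop blocks
    frozen = agent.steps_done
    await asyncio.sleep(0.01)
    assert agent.steps_done == frozen  # no progress while paused
    agent.stop()
    await task                         # exits cleanly at the checkpoint
    return frozen

print(asyncio.run(demo()) > 0)
# → True
```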

### Context Auto-Compression

When conversation history grows too large, the system automatically compresses it into "Working Notes" — a dynamic state snapshot generated by the LLM. Tasks can run for hours; the context window never overflows.
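A toy sketch of the compression step (in agentmatrix the Working Notes are generated by the LLM; here a fake summarizer is injected so the mechanism is visible):

```python
# Illustration only: once the history exceeds a budget, older messages
# collapse into one summary while the most recent turn stays verbatim.
def compress_history(messages: list[str], max_chars: int, summarize) -> list[str]:
    total = sum(len(m) for m in messages)
    if total <= max_chars:
        return messages                       # still fits — no compression
    summary = summarize(messages[:-1])        # condense everything but the last turn
    return [f"[Working Notes] {summary}", messages[-1]]

history = ["step one " * 50, "step two " * 50, "latest user turn"]
compact = compress_history(
    history,
    max_chars=400,
    summarize=lambda ms: f"{len(ms)} earlier messages condensed",
)
print(compact[0])
# → [Working Notes] 2 earlier messages condensed
```

Because the compressed form is regenerated as the task evolves, the history length stays bounded no matter how long the run.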

### Action System

Actions are detected from natural language output via `<action_script>` blocks. The Cerebellum negotiates parameters with the Brain, handles ambiguity, and executes.
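A minimal sketch of the detection step (not the library's parser — only the `<action_script>` tag name comes from the docs; the regex approach and the action syntax inside the blocks are assumptions):

```python
import re

# Illustration: pull <action_script> blocks out of free-form model output.
ACTION_RE = re.compile(r"<action_script>(.*?)</action_script>", re.DOTALL)

def detect_actions(text: str) -> list[str]:
    """Return the raw body of each action block, in order of appearance."""
    return [m.strip() for m in ACTION_RE.findall(text)]

reply = (
    "I'll check the directory first.\n"
    "<action_script>list_files(path='.')</action_script>\n"
    "Then I'll read the README.\n"
    "<action_script>read_file(path='README.md')</action_script>"
)
print(detect_actions(reply))
# → ["list_files(path='.')", "read_file(path='README.md')"]
```

Everything outside the blocks remains plain reasoning, which is what lets the Brain think without format constraints.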

### Skill System

Built-in Python skill mixins:

- **base** — Date/time utilities
- **file** — File read/write, search
- **shell** — Shell command execution

Extend with custom Python skills or Markdown-based procedural knowledge.
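The mixin pattern itself looks like this (a hedged sketch — class and method names are illustrative, not agentmatrix's actual built-ins):

```python
from datetime import date

# Illustration of skills-as-mixins: each skill is a small class of
# methods, and an agent's skill set is composed by inheritance.
class DateSkill:
    """Date/time utilities (cf. the built-in 'base' skill)."""
    def today(self) -> str:
        return date.today().isoformat()

class FileSkill:
    """File access (cf. the built-in 'file' skill)."""
    def read_file(self, path: str) -> str:
        with open(path, encoding="utf-8") as f:
            return f.read()

class MySkills(DateSkill, FileSkill):
    """A custom skill set composed from mixins."""

skills = MySkills()
print(len(skills.today()))  # ISO date is always 10 characters
# → 10
```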

## Dependencies

- `pyyaml>=6.0`
- `python-dotenv>=1.0.0`
- `requests>=2.31.0`
- `aiohttp>=3.8.0`
- `Jinja2>=3.1.0`
- `pydantic>=2.0.0`

## Links

- **Repository**: https://github.com/webdkt/agentmatrix
- **Tutorial**: [tutorial/cli-agent/](https://github.com/webdkt/agentmatrix/tree/main/tutorial/cli-agent)
- **Documentation**: [docs/](https://github.com/webdkt/agentmatrix/tree/main/docs)
- **License**: Apache License 2.0
