Metadata-Version: 2.4
Name: distributed_a2a
Version: 0.1.12
Summary: A library for building A2A agents with routing capabilities
Home-page: https://github.com/Barra-Technologies/distributed-a2a
Author: Fabian Bell
Author-email: Fabian Bell <fabian.bell@barrabytes.com>, Simon Gyimah <simon.gyimah@barrabytes.com>
License: MIT
Project-URL: Homepage, https://github.com/Barra-Technologies-Internal/distributed-a2a
Project-URL: Repository, https://github.com/Barra-Technologies-Internal/distributed-a2a
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: langchain==1.2.3
Requires-Dist: langchain-core==1.2.7
Requires-Dist: langchain-openai==1.1.7
Requires-Dist: langchain_mcp_adapters==0.2.1
Requires-Dist: langgraph==1.0.5
Requires-Dist: langgraph-dynamodb-checkpoint==0.2.6.4
Requires-Dist: pydantic==2.12.5
Requires-Dist: boto3==1.42.25
Requires-Dist: a2a-sdk==0.3.22
Requires-Dist: a2a-types==0.1.0
Requires-Dist: build==1.4.0
Requires-Dist: twine==6.2.0
Requires-Dist: fastapi
Requires-Dist: uvicorn
Dynamic: author
Dynamic: home-page
Dynamic: license-file
Dynamic: requires-python

# A2A Agent Library

A Python library for building A2A (Agent-to-Agent) agents with routing capabilities, DynamoDB-backed registry, and LangChain integration.

## Features

- **StatusAgent**: Base agent implementation with status tracking and structured responses
- **RoutingAgentExecutor**: Agent executor with intelligent routing capabilities
- **DynamoDB Registry**: Dynamic agent card registry with heartbeat mechanism
- **Server Utilities**: FastAPI application builder with A2A protocol support
- **LangChain Integration**: Built on LangChain for flexible model integration

## Installation

```bash
pip install distributed-a2a
```

## Quick Start

1. Start a server with your agent application:
```python
import uvicorn
from distributed_a2a import (
    AgentConfig, 
    AgentItem, 
    CardConfig, 
    SkillConfig, 
    LLMConfig, 
    load_app
)

# Create the agent config directly via the object
agent_config = AgentConfig(
    agent=AgentItem(
        system_prompt="You are a helpful assistant...",
        card=CardConfig(
            name="MyAgent",
            version="1.0.0",
            url="http://localhost:8000",
            description="My specialized agent",
            skills=[
                SkillConfig(
                    id='example_skill',
                    name='Example Skill',
                    description='An example skill',
                    tags=['example']
                )
            ]
        ),
        llm=LLMConfig(
            base_url="https://openrouter.ai/api/v1",
            model="google/gemini-2.0-flash-001",
            api_key_env="API_KEY"
        )
    )
)

# Create your agent application
app = load_app(agent_config=agent_config)

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```

2. Send a request with the client:
```python
from uuid import uuid4

from distributed_a2a import RoutingA2AClient

if __name__ == "__main__":
    import asyncio

    request = "Tell me the weather in Bonn"
    client = RoutingA2AClient("http://localhost:8000")
    response: str = asyncio.run(client.send_message(request, str(uuid4())))
    print(response)
```

## Local Development Setup

To set up a local running environment for testing and development, you need a registry and at least one agent. This guide walks you through setting up an in-memory registry, a sample agent, and a router that routes requests between them.

### Prerequisites

*   **Environment variables**: Set `API_KEY` (your LLM provider's API key, e.g. for OpenRouter) in every terminal session you use.
*   **Package availability**: The `distributed_a2a` package must be importable, either installed via pip or available on your `PYTHONPATH`.

```bash
export API_KEY="your-llm-api-key"
pip install distributed-a2a
```

### 1. Create and Start the In-Memory Registry

Create a file named `start_registry.py` with the following content:

```python
import uvicorn
from distributed_a2a import load_registry, InMemoryAgentRegistry, InMemoryMcpRegistry


def start_in_memory_registry():
    agent_registry = InMemoryAgentRegistry()
    mcp_registry = InMemoryMcpRegistry()
    app = load_registry(agent_registry=agent_registry, mcp_registry=mcp_registry)
    # The port here (8001) must match the port expected by agents in their config
    uvicorn.run(app, host="0.0.0.0", port=8001)

if __name__ == "__main__":
    start_in_memory_registry()
```

Run the registry:
> **Note**: The registry server runs on port `8001` by default in the script above. Ensure that your agents are configured to use this same port for their `registry.agent.url`.

```bash
python3 start_registry.py
```
The registry will be available at `http://localhost:8001`.
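To confirm the registry process is up before starting any agents, you can probe the port with `curl`. This only checks that something is listening on port 8001; the registry's specific HTTP endpoints aren't documented here, so don't read anything into the particular status code returned.

```shell
# Prints an HTTP status code if a server answers on port 8001;
# curl exits non-zero (printing "000") if nothing is listening.
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8001/
```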

### 2. Configure and Start Example Agents

You can start agents by directly instantiating the `AgentConfig` object.

#### Create `start_agent.py`:
```python
import uvicorn
import sys
from distributed_a2a import (
    AgentConfig, 
    AgentItem, 
    RegistryConfig, 
    RegistryItemConfig, 
    CardConfig, 
    SkillConfig, 
    LLMConfig, 
    load_app
)

def start_agent(port: int):
    # Create the agent config directly via the object
    agent_config = AgentConfig(
        agent=AgentItem(
            registry=RegistryConfig(
                agent=RegistryItemConfig(url="http://localhost:8001"),
                mcp=RegistryItemConfig(url="http://localhost:8001")
            ),
            system_prompt="You are a helpful assistant.",
            card=CardConfig(
                name="my-agent",
                version="1.0.0",
                url=f"http://localhost:{port}",
                description="A sample agent",
                default_input_modes=["text", "text/plaintext"],
                default_output_modes=["text", "text/plaintext"],
                preferred_transport_protocol="HTTP+JSON",
                skills=[
                    SkillConfig(
                        id="sample",
                        name="Sample Skill",
                        description="A sample skill",
                        tags=["sample"]
                    )
                ]
            ),
            llm=LLMConfig(
                base_url="https://openrouter.ai/api/v1",
                model="google/gemini-2.0-flash-001",
                api_key_env="API_KEY",
                reasoning_effort="high"
            )
        )
    )

    app = load_app(agent_config=agent_config)
    uvicorn.run(app, host="0.0.0.0", port=port)

if __name__ == "__main__":
    port = int(sys.argv[1]) if len(sys.argv) > 1 else 8080
    start_agent(port)
```

Run an agent:

```bash
python3 start_agent.py 8080
```

### 3. Configure and Start a Router Agent

The Router Agent is a special agent that can route requests to other agents registered in the registry.

#### Create `start_router.py`:
```python
import uvicorn
import sys
from distributed_a2a import (
    RouterConfig, 
    RouterItem, 
    RegistryConfig, 
    RegistryItemConfig, 
    CardConfig, 
    LLMConfig, 
    load_router
)

def start_router(port: int):
    # Create the router config directly via the object
    router_config = RouterConfig(
        router=RouterItem(
            registry=RegistryConfig(
                agent=RegistryItemConfig(url="http://localhost:8001")
            ),
            card=CardConfig(
                name="router",
                version="1.0.0",
                url=f"http://localhost:{port}",
                description="Main entry point router",
                default_input_modes=["text", "text/plaintext"],
                default_output_modes=["text", "text/plaintext"],
                preferred_transport_protocol="HTTP+JSON"
            ),
            llm=LLMConfig(
                base_url="https://openrouter.ai/api/v1",
                model="google/gemini-2.0-flash-001",
                api_key_env="API_KEY",
                reasoning_effort="high"
            )
        )
    )

    app = load_router(router_config=router_config)
    uvicorn.run(app, host="0.0.0.0", port=port)

if __name__ == "__main__":
    port = int(sys.argv[1]) if len(sys.argv) > 1 else 8000
    start_router(port)
```

Run the router:
```bash
python3 start_router.py 8000
```

## Requirements

- Python 3.10+
- langchain
- langchain-core
- langchain-openai
- langgraph
- pydantic
- boto3
- a2a-sdk
- fastapi
- uvicorn

## License

MIT

## Contributing

Contributions are welcome! Please feel free to submit a Pull Request.
