Metadata-Version: 2.4
Name: agentopera
Version: 0.0.7
Project-URL: Documentation, https://github.com/chaoyanghe/agentopera#readme
Project-URL: Issues, https://github.com/chaoyanghe/agentopera/issues
Project-URL: Source, https://github.com/chaoyanghe/agentopera
Author-email: chaoyanghe <choayanghe.com@gmail.com>
License-Expression: MIT
License-File: LICENSE
License-File: LICENSE.txt
Classifier: Development Status :: 4 - Beta
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: Implementation :: CPython
Classifier: Programming Language :: Python :: Implementation :: PyPy
Requires-Python: >=3.8
Requires-Dist: chainlit
Requires-Dist: cookiecutter
Requires-Dist: grpcio-tools~=1.70.0
Requires-Dist: mypy-protobuf
Requires-Dist: mypy==1.13.0
Requires-Dist: packaging
Requires-Dist: poethepoet
Requires-Dist: polars
Requires-Dist: pyright==1.1.389
Requires-Dist: pytest
Requires-Dist: pytest-asyncio
Requires-Dist: pytest-cov
Requires-Dist: pytest-mock
Requires-Dist: pytest-xdist
Requires-Dist: rich
Requires-Dist: ruff==0.4.8
Requires-Dist: streamlit
Requires-Dist: tomli
Requires-Dist: tomli-w
Requires-Dist: typer
Description-Content-Type: text/markdown

# Development
```bash
pip install agentopera

# Host
python run_host.py

# Worker Runtime
uvicorn src.AgentOpera.main:app --host 0.0.0.0 --port 8000 --reload
```

# Release

```bash
rm -rf dist build
hatch build
twine upload dist/*
```

# Chat

```bash
curl -X POST http://localhost:8000/api/chat -H "Content-Type: application/json" -d '{
	"id": "ZWToutqeUawzfaR7",
	"messages": [{
		"role": "user",
		"content": "introduce TensorOpera AI, please use 10000 words",
		"parts": [{
			"type": "text",
			"text": "tensoropera ai"
		}]
	}],
	"model": "chainopera-default",
	"group": "extreme"
}'
```


# Multi Agent Orchestration, Distributed Agent Runtime Example

This repository is an example of how to run a distributed agent runtime. The system is composed of three main components:

1. The agent host runtime, which is responsible for managing the eventing engine, and the pub/sub message system.
2. The worker runtime, which is responsible for the lifecycle of the distributed agents, including the "semantic router".
3. The user proxy, which is responsible for managing the user interface and the user interactions with the agents.


## Example Scenario

In this example, we have a simple scenario with a set of distributed agents (an "HR" agent and a "Finance" agent) that an enterprise might use to manage its HR and Finance operations. Each of these agents is independent and can run on a different machine. While many multi-agent systems are built to have agents collaborate on a difficult task, the goal of this example is to show how an enterprise might manage a large set of agents, each suited to an individual task, and how to route a user to the most relevant agent for the task at hand.

The way this system is designed, when a user initiates a session, the semantic router agent identifies the user's intent (currently using the overly simple method of string matching), identifies the most relevant agent, and routes the user to that agent. That agent then manages the conversation with the user, and the user can interact with it in a conversational manner.

While the logic of the agents is simple in this example, the goal is to show how the distributed runtime capabilities of autogen support this scenario independently of the capabilities of the agents themselves.

## Getting Started

1. Install `autogen-core` and its dependencies

## To run

Since this example is meant to demonstrate a distributed runtime, the components of this example are meant to run in different processes - i.e. different terminals.

In 2 separate terminals, run:

```bash
# Terminal 1, to run the Agent Host Runtime
python run_host.py
```

```bash
# Terminal 2, to run the Worker Runtime
python run_semantic_router.py
```

The first terminal should log a series of events where the various agents are registered
against the runtime.

In the second terminal, you may enter a request related to finance or HR scenarios.
In our simple example here, this means using one of the following keywords in your request:

- For the finance agent: "finance", "money", "budget"
- For the HR agent: "hr", "human resources", "employee"
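
The string-matching routing described above can be sketched in plain Python. This is an illustrative stand-in for the example's semantic router agent, not the project's actual API: the `route_message` function and the keyword table are assumptions for demonstration.

```python
# Minimal sketch of keyword-based intent routing, mirroring the
# string-matching approach used by this example's semantic router.
# Function and topic names here are illustrative, not the project's API.

KEYWORDS = {
    "finance_agent": ["finance", "money", "budget"],
    "hr_agent": ["hr", "human resources", "employee"],
}

def route_message(message: str) -> str:
    """Return the agent topic whose keywords match the user message."""
    text = message.lower()
    for agent, keywords in KEYWORDS.items():
        if any(kw in text for kw in keywords):
            return agent
    # No keyword matched: the user stays with the user proxy.
    return "user_proxy"
```

For instance, `route_message("What is our budget this quarter?")` routes to the finance agent, while an unrelated message falls back to the user proxy.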

You will then see the host and worker runtimes send messages back and forth, routing to the correct
agent, before the final response is printed.

The conversation can then continue with the selected agent until the user sends a message containing "END", at which point the agent will be disconnected from the user and a new conversation can start.

## Message Flow

Using the "Topic" feature of the agent host runtime, the message flow of the system is as follows:

```mermaid
sequenceDiagram
    participant User
    participant Closure_Agent
    participant User_Proxy_Agent
    participant Semantic_Router
    participant Worker_Agent

    User->>User_Proxy_Agent: Send initial message
    Semantic_Router->>Worker_Agent: Route message to appropriate agent
    Worker_Agent->>User_Proxy_Agent: Respond to user message
    User_Proxy_Agent->>Closure_Agent: Forward message to externally facing Closure Agent
    Closure_Agent->>User: Expose the response to the User
    User->>Worker_Agent: Directly send follow up message
    Worker_Agent->>User_Proxy_Agent: Respond to user message
    User_Proxy_Agent->>Closure_Agent: Forward message to externally facing Closure Agent
    Closure_Agent->>User: Return response
    User->>Worker_Agent: Send "END" message
    Worker_Agent->>User_Proxy_Agent: Confirm session end
    User_Proxy_Agent->>Closure_Agent: Confirm session end
    Closure_Agent->>User: Display session end message
```
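
The topic-based flow above can be imitated with a minimal in-process pub/sub sketch. Topic names and handlers here are illustrative assumptions; in the real system, the agent host runtime delivers these messages between separate processes over gRPC.

```python
from collections import defaultdict

# Minimal in-process pub/sub illustrating the topic-based message flow
# in the diagram above. This is a simplified sketch, not the runtime's API.
class TopicBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, message):
        for handler in self._subscribers[topic]:
            handler(message)

bus = TopicBus()
log = []

# Worker agent handles its topic and replies via the user proxy's topic.
bus.subscribe("finance_agent", lambda m: bus.publish("user_proxy", f"finance reply to: {m}"))
# User proxy forwards responses to the externally facing closure agent.
bus.subscribe("user_proxy", lambda m: bus.publish("closure_agent", m))
# Closure agent exposes the response to the user (here: appends to a log).
bus.subscribe("closure_agent", log.append)

bus.publish("finance_agent", "What is the budget?")
# log == ["finance reply to: What is the budget?"]
```

Publishing to a topic rather than a specific agent is what lets the router, proxy, and workers run in different processes without knowing each other's addresses.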


# 🚀 Launching the Docker Container for Agents

### ✅ **Build the Docker Image**

Ensure you're in the project root directory:

```bash
docker build -t streaming_agent_app .
```

### ✅ **Run the Docker Container**

Launch the container and expose required ports:

```bash
docker run -d -p 8000:8000 -p 50051:50051 --name streaming_agent_container streaming_agent_app
```

### ✅ **Check Logs and Running Services**

To check logs:
```bash
docker logs streaming_agent_container
```

To verify running services:
```bash
docker exec -it streaming_agent_container supervisorctl status
```

---

# 🔗 **cURL Command to Test the Endpoint**

Send a POST request to the `/chat` endpoint to test the service:

### ✅ **Single-line Command**

```bash
curl -X POST http://localhost:8000/chat/stream -H "Content-Type: application/json" -d '{"message": "Research history of AI?", "id": "test"}'


```

### ✅ **Multi-line Command for zsh/bash**

```bash
curl -X POST http://localhost:8000/chat \
-H "Content-Type: application/json" \
-d '{
  "message": "What is our company'\''s vacation policy?",
  "user_id": "test"
}'
```

### ✅ **Expected Response**

If everything works correctly, you should see a JSON response like:

```json
{
  "message": "Our vacation policy allows employees to take up to 20 days of paid leave annually.",
  "status": "completed",
  "is_final": true,
  "user_id": "test",
  "conversation_id": "1234-5678"
}
```
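
If you prefer Python over curl, the same request and response check can be sketched with the standard library. The payload fields mirror the curl example above; `is_final_response` is an illustrative helper, and the actual HTTP call (commented out) assumes the container is running on port 8000.

```python
import json
import urllib.request

# Same payload as the multi-line curl example above.
payload = {"message": "What is our company's vacation policy?", "user_id": "test"}

def is_final_response(body: str) -> bool:
    """Return True if the JSON response reports a completed, final answer."""
    data = json.loads(body)
    return data.get("status") == "completed" and data.get("is_final") is True

# Uncomment once the container is up:
# req = urllib.request.Request(
#     "http://localhost:8000/chat",
#     data=json.dumps(payload).encode(),
#     headers={"Content-Type": "application/json"},
# )
# with urllib.request.urlopen(req) as resp:
#     print(is_final_response(resp.read().decode()))

# Checking the helper against the expected response shape shown above:
example = '{"message": "...", "status": "completed", "is_final": true, "user_id": "test"}'
# is_final_response(example) -> True
```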

---

# ❓ **Troubleshooting**

- Ensure the container is running with:
  ```bash
  docker ps
  ```

- Check for errors in the logs:
  ```bash
  docker logs streaming_agent_container
  ```

- Verify that the `/chat` endpoint is accessible:
  ```bash
  curl -I http://localhost:8000/chat
  ```
