Metadata-Version: 2.4
Name: AgentManager
Version: 0.1.0
Summary: Managing a unified ecosystem of LLM providers, agents, and MCP servers.
Author: Nilavo Boral
Author-email: nilavoboral@gmail.com
Project-URL: LinkedIn, https://www.linkedin.com/in/nilavo-boral-123bb5228/
Classifier: Programming Language :: Python :: 3.11
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.11
Description-Content-Type: text/markdown
Requires-Dist: langchain==1.0.5
Requires-Dist: langchain-core==1.0.4
Requires-Dist: langchain-mcp-adapters==0.1.12
Requires-Dist: langchain-google-genai==3.0.1
Requires-Dist: langchain-openai==1.0.2
Requires-Dist: langchain-groq==1.0.0
Requires-Dist: langchain-mistralai==1.0.1
Requires-Dist: langchain-ollama==1.0.0
Requires-Dist: nest_asyncio==1.6.0
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: project-url
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

# 🤖 AgentManager: Managing a unified ecosystem of LLM providers, agents, and MCP servers.

**AgentManager** is a Python package that provides a unified, high-level system for working with Large Language Models (LLMs). It simplifies managing provider credentials, selecting models, and constructing AI agents. It also supports integrating external tools via the Model Context Protocol (MCP) for more advanced workflows.

---

## ✨ Features

* **Universal Provider Support:** Seamlessly connect to various LLM providers (e.g., Google, OpenAI) with a single line of code.
* **Agent Construction:** Quickly set up basic agents or advanced tool-powered agents with minimal setup.
* **MCP Integration:** Easily incorporate tools from an MCP server to enable complex, external actions.
* **Chat Support:** Supports both single-turn interactions and stateful, continuous conversations.

---

### 📄 Requirements

* Python 3.11+

---

## 🛠️ Installation

You can install `agentmanager` directly from PyPI:

```bash
pip install agentmanager
```
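To confirm the install succeeded, you can check that the package is importable. Here is a minimal standard-library sketch (`importlib.util.find_spec` locates a package without actually importing it):

```python
import importlib.util


def is_installed(package: str) -> bool:
    """Return True if `package` can be found on the current Python path."""
    return importlib.util.find_spec(package) is not None


if __name__ == "__main__":
    print("agentmanager installed:", is_installed("agentmanager"))
```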

---


# ℹ️ Utility Methods

The `CloudAgentManager` provides helpful utility methods for discovering supported providers and models.

| Method                               | Description                                                          |
|--------------------------------------|----------------------------------------------------------------------|
| `cloud_agent_manager.get_providers()`            | Returns a list of all supported LLM provider names.                  |
| `cloud_agent_manager.get_models(provider_name)`  | Returns a list of all available model names for a given provider.   |
| `cloud_agent_manager.get_provider_key(provider_name)` | Returns the URL link where you can obtain your API key for the provider. |

## Example Utility Usage

```python
from agentmanager import CloudAgentManager

cloud_agent_manager = CloudAgentManager()

# Get all supported providers
providers = cloud_agent_manager.get_providers()
print("Supported Providers:", providers)
# Output might be: ['OpenAI', 'Google', 'Ollama', ...]

# Get models for a specific provider
ollama_models = cloud_agent_manager.get_models("ollama")
print("Ollama Models:", ollama_models)

# Get API key page link
page_link_for_api_key = cloud_agent_manager.get_provider_key("mistral")
print("Mistral Key Link:", page_link_for_api_key)
```
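Since `get_models()` returns plain model-name strings, you can validate a requested model before passing it to `prepare_llm`. The helper below is a hypothetical sketch, not part of the package:

```python
def pick_model(available: list[str], preferred: str) -> str:
    """Return `preferred` if the provider offers it; otherwise fall back to the first listed model."""
    if preferred in available:
        return preferred
    if available:
        return available[0]
    raise ValueError("Provider returned no models.")


# Usage (assuming cloud_agent_manager is an initialized CloudAgentManager):
# models = cloud_agent_manager.get_models("google")
# model_name = pick_model(models, "gemini-2.5-flash")
```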

---

# 🚀 Quick Start
The core functionality is encapsulated in the `CloudAgentManager` class. Here is a basic example of a single-turn chat.

## Single-Turn Chat Example

This script demonstrates initializing an LLM, creating a basic agent, and getting a single response.

```python
import asyncio
from agentmanager import CloudAgentManager

# The following constants would typically be read from environment variables or a secure vault
# Placeholder values are used for demonstration.
PROVIDER = "google"
API_KEY = "YOUR_API_KEY_HERE" # !!! REPLACE WITH YOUR ACTUAL API KEY !!!
MODEL_NAME = "gemini-2.5-flash"

async def single_chat():
    # 1️⃣ Initialize the CloudAgentManager
    cloud_agent_manager = CloudAgentManager()

    # 2️⃣ Prepare the LLM (handles API key and model validation)
    llm = cloud_agent_manager.prepare_llm(PROVIDER, API_KEY, MODEL_NAME)
        
    # 3️⃣ Prepare the agent (no MCP tools in this example)
    agent, tools = await cloud_agent_manager.prepare_agent(llm)
    
    # 4️⃣ Send message
    user_message = "what is the capital of India?"    
    response_messages = await cloud_agent_manager.get_agent_response(agent, user_message)
    
    for m in response_messages:
        print(m.content)


if __name__ == "__main__":
    asyncio.run(single_chat())
```
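The script above hard-codes `API_KEY` for brevity; in practice, read it from an environment variable as the comments suggest. A small helper sketch (the variable name `GOOGLE_API_KEY` is just an example):

```python
import os


def load_api_key(var_name: str = "GOOGLE_API_KEY") -> str:
    """Read an API key from the environment, failing loudly if it is missing."""
    key = os.environ.get(var_name, "").strip()
    if not key:
        raise RuntimeError(f"Set the {var_name} environment variable before running.")
    return key
```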

## 💬 Continuous Chat Loop

For an interactive, multi-turn conversation, pass a `chat_history` list so the agent maintains context across turns.

```python
import asyncio
from agentmanager import CloudAgentManager
from typing import Any, List

# The following constants would typically be read from environment variables or a secure vault
# Placeholder values are used for demonstration.
PROVIDER = "google"
API_KEY = "YOUR_API_KEY_HERE" # !!! REPLACE WITH YOUR ACTUAL API KEY !!!
MODEL_NAME = "gemini-2.5-flash"

async def chat_loop():
    # 1️⃣ Initialize the CloudAgentManager
    cloud_agent_manager = CloudAgentManager()

    # 2️⃣ Prepare LLM
    llm = cloud_agent_manager.prepare_llm(PROVIDER, API_KEY, MODEL_NAME)
        
    # 3️⃣ Prepare agent (e.g., without MCP tools)
    agent, tools = await cloud_agent_manager.prepare_agent(llm)

    # 4️⃣ Initialize chat history
    # The agent will use this list to maintain context across turns.
    chat_history: List[Any] = []

    # 5️⃣ Terminal loop
    print("\n--- Start Chat ---")
    print("Type 'exit' or 'quit' to end the session.")
    while True:
        user_input = input("You: ").strip()
        
        if user_input.lower() in {"exit", "quit"}:
            print("👋 Goodbye!")
            break

        if not user_input:
            continue

        try:
            # The cloud_agent_manager updates chat_history in place
            new_messages = await cloud_agent_manager.get_agent_response(agent, user_input, chat_history)
            
            for m in new_messages:
                print(f"Agent: {m.content}")
                
        except Exception as e:
            print(f"❌ Agent failed to respond: {e}")

if __name__ == "__main__":
    asyncio.run(chat_loop())
```
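Because `chat_history` grows by a few messages each turn, long sessions can eventually exceed the model's context window. One option is to cap the list between turns; this sketch assumes the history is a plain Python list updated in place, as in the loop above:

```python
def trim_history(history: list, max_messages: int = 20) -> None:
    """Drop the oldest messages in place, keeping only the most recent `max_messages`."""
    if len(history) > max_messages:
        del history[:-max_messages]
```

Call `trim_history(chat_history)` at the end of each loop iteration. Note that naive truncation can split a tool-call/tool-result pair, so a production implementation may need to trim on message-type boundaries instead.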

## ⚙️ Advanced: Agent with MCP Tools

If your agent needs to interact with external services via the Model Context Protocol (MCP), pass the MCP configuration during agent preparation.

```python
import asyncio
from typing import Any, List

from agentmanager import CloudAgentManager

# The following constants would typically be read from environment variables or a secure vault
# Placeholder values are used for demonstration.
PROVIDER = "google" # !!! REPLACE WITH PROVIDER YOU WANT TO USE !!!
API_KEY = "YOUR_API_KEY_HERE" # !!! REPLACE WITH PROVIDER'S API KEY !!!
MODEL_NAME = "gemini-2.5-flash" # !!! REPLACE WITH MODEL YOU WANT TO USE !!!

# Define the MCP configuration
mcps = [
    {
        "url": "MCP_URL_1"  # !!! REPLACE WITH YOUR ACTUAL MCP URL !!!
    },
    {
        "url": "MCP_URL_2",  # !!! REPLACE WITH YOUR ACTUAL MCP URL !!!
        # Optional: custom headers for authentication/routing
        "header": {
            "HEADER_NAME": "HEADER_VALUE"  # !!! REPLACE WITH YOUR ACTUAL HEADER NAME & VALUE !!!
        }
    },
    # ...
]

async def mcp_agent_example():

    # 1️⃣ Initialize the CloudAgentManager
    cloud_agent_manager = CloudAgentManager()

    # 2️⃣ Prepare LLM
    llm = cloud_agent_manager.prepare_llm(PROVIDER, API_KEY, MODEL_NAME)
    
    # 3️⃣ Prepare agent with MCP configuration
    agent, tools = await cloud_agent_manager.prepare_agent(llm, mcps)

    # 4️⃣ Initialize chat history
    # The agent will use this list to maintain context across turns.
    chat_history: List[Any] = []

    # 5️⃣ Terminal loop
    print("\n--- Start Chat ---")
    print("Type 'exit' or 'quit' to end the session.")
    while True:
        user_input = input("You: ").strip()
        
        if user_input.lower() in {"exit", "quit"}:
            print("👋 Goodbye!")
            break

        if not user_input:
            continue

        try:
            # The cloud_agent_manager updates chat_history in place
            new_messages = await cloud_agent_manager.get_agent_response(agent, user_input, chat_history)
            
            for m in new_messages:
                print(f"Agent: {m.content}")
                
        except Exception as e:
            print(f"❌ Agent failed to respond: {e}")

if __name__ == "__main__":
    asyncio.run(mcp_agent_example())
```
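Before handing `mcps` to `prepare_agent`, it can be worth sanity-checking the configuration shape shown above (a list of dicts, each with a required `url` and an optional `header` mapping). A hypothetical validation helper:

```python
def validate_mcp_config(mcps: list[dict]) -> None:
    """Raise ValueError if any MCP entry lacks a usable URL or has a malformed header map."""
    for i, entry in enumerate(mcps):
        url = entry.get("url", "")
        if not isinstance(url, str) or not url.startswith(("http://", "https://")):
            raise ValueError(f"MCP entry {i}: 'url' must be an http(s) URL, got {url!r}")
        header = entry.get("header")
        if header is not None and not isinstance(header, dict):
            raise ValueError(f"MCP entry {i}: 'header' must be a dict of header names to values")
```

This catches the common mistake of leaving a `MCP_URL_…` placeholder in place before the agent tries to connect.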

---
