Metadata-Version: 2.2
Name: gllm-agents-binary
Version: 0.4.19
Summary: A library for managing agents in Gen AI applications.
Author-email: Raymond Christopher <raymond.christopher@gdplabs.id>
Requires-Python: <3.14,>=3.11
Description-Content-Type: text/markdown
Requires-Dist: a2a-sdk<0.2.12,>=0.2.4
Requires-Dist: bosa-connectors-binary<0.3.0,>=0.2.2
Requires-Dist: bosa-core-binary[logger,telemetry]<0.9.0,>=0.8.0
Requires-Dist: colorama<0.5.0,>=0.4.6
Requires-Dist: deprecated<2.0.0,>=1.2.18
Requires-Dist: fastapi<0.118.0,>=0.117.0
Requires-Dist: gllm-core-binary<0.4.0,>=0.3.0
Requires-Dist: gllm-inference-binary[anthropic,bedrock,google-genai,google-vertexai,openai]<0.6.0,>=0.5.0
Requires-Dist: gllm-tools-binary<0.2.0,>=0.1.1
Requires-Dist: google-adk<0.6.0,>=0.5.0
Requires-Dist: langchain<0.4.0,>=0.3.0
Requires-Dist: langchain-openai<0.4.0,>=0.3.17
Requires-Dist: langgraph<0.3.0,>=0.2.16
Requires-Dist: mem0ai<0.2.0,>=0.1.115
Requires-Dist: minio<8.0.0,>=7.2.16
Requires-Dist: pydantic<3.0.0,>=2.9.1
Requires-Dist: python-dotenv<2.0.0,>=1.0.0
Requires-Dist: requests<3.0.0,>=2.32.4
Requires-Dist: uvicorn<0.35.0,>=0.34.0
Requires-Dist: authlib<1.7.0,>=1.6.4
Provides-Extra: dev
Requires-Dist: coverage<8.0.0,>=7.4.4; extra == "dev"
Requires-Dist: mypy<2.0.0,>=1.15.0; extra == "dev"
Requires-Dist: pre-commit<4.0.0,>=3.7.0; extra == "dev"
Requires-Dist: pytest<9.0.0,>=8.1.1; extra == "dev"
Requires-Dist: pytest-asyncio<0.24.0,>=0.23.6; extra == "dev"
Requires-Dist: pytest-cov<6.0.0,>=5.0.0; extra == "dev"
Requires-Dist: ruff<0.7.0,>=0.6.7; extra == "dev"
Requires-Dist: pillow<12.0.0,>=11.3.0; extra == "dev"

# GLLM Agents

## Description

A library for managing agents in Generative AI applications.

## Installation

### Prerequisites
- Python 3.11–3.13 - [Install here](https://www.python.org/downloads/)
- pip (if installing with pip) - [Install here](https://pip.pypa.io/en/stable/installation/)
- Poetry 1.8.1+ (if installing with Poetry) - [Install here](https://python-poetry.org/docs/#installation)
- Git (if installing from Git) - [Install here](https://git-scm.com/downloads)
- For Git installation:
  - Access to the [GDP Labs SDK GitHub repository](https://github.com/GDP-ADMIN/gen-ai-internal)

### 1. Installation from Artifact Registry
Choose one of the following methods to install the package:

#### Using pip
```bash
pip install gllm-agents-binary
```

#### Using Poetry
```bash
poetry add gllm-agents-binary
```

### 2. Development Installation (Git)
For development purposes, you can install directly from the Git repository:
```bash
poetry add "git+ssh://git@github.com/GDP-ADMIN/gen-ai-internal.git#subdirectory=libs/gllm-agents"
```

## Managing Dependencies
1. Go to the root folder of the `gllm-agents` module, e.g. `cd libs/gllm-agents`.
2. Run `poetry shell` to activate a virtual environment.
3. Run `poetry lock` to create a lock file if one does not exist yet.
4. Run `poetry install` to install the `gllm-agents` requirements for the first time.
5. Run `poetry update` if you change any dependency version in `pyproject.toml`.

## Contributing
Please refer to this [Python Style Guide](https://docs.google.com/document/d/1uRggCrHnVfDPBnG641FyQBwUwLoFw0kTzNqRm92vUwM/edit?usp=sharing)
for the code style, documentation standards, and static code analysis (SCA) tools to use when contributing to this project.

1. Activate the `pre-commit` hooks with `pre-commit install`.
2. Run `poetry shell` to activate a virtual environment.
3. Run `poetry lock` to create a lock file if one does not exist yet.
4. Run `poetry install` to install the `gllm-agents` requirements for the first time.
5. Run `which python` to get the interpreter path to set in Visual Studio Code (`Ctrl`+`Shift`+`P` or `Cmd`+`Shift`+`P`, then "Python: Select Interpreter").
6. Run the unit tests to verify the setup:
```bash
poetry run pytest -s tests/unit_tests/
```

## Hello World Examples

### Prerequisites
- Python 3.11–3.13
- Install the binary package:

```bash
pip install gllm-agents-binary
```

- For OpenAI: Set your API key in the environment:
```bash
export OPENAI_API_KEY=your-openai-key
```
- For Google ADK: Set your API key in the environment:
```bash
export GOOGLE_API_KEY=your-google-api-key
```

### Run the Hello World Examples

The example scripts are located in the `gllm_agents/examples` directory within the library. You can run them individually or use the `run_all_examples.py` script.

**1. Running Individual Examples:**

Navigate to the library's root directory (e.g., `libs/gllm-agents` if you cloned the repository).

**LangGraph (OpenAI):**
```bash
python gllm_agents/examples/hello_world_langgraph.py
```

**LangGraph with BOSA Connector (OpenAI):**
```bash
python gllm_agents/examples/hello_world_langgraph_bosa_twitter.py
```

**LangGraph Streaming (OpenAI):**
```bash
python gllm_agents/examples/hello_world_langgraph_stream.py
```

**LangGraph Multi-Agent Coordinator (OpenAI):**
```bash
python gllm_agents/examples/hello_world_a2a_multi_agent_coordinator_server.py
```

**Google ADK:**
```bash
python gllm_agents/examples/hello_world_google_adk.py
```

**Google ADK Streaming:**
```bash
python gllm_agents/examples/hello_world_google_adk_stream.py
```

**LangChain (OpenAI):**
```bash
python gllm_agents/examples/hello_world_langchain.py
```

**LangChain Streaming (OpenAI):**
```bash
python gllm_agents/examples/hello_world_langchain_stream.py
```

**2. Running MCP Examples**

### Prerequisites

Ensure you have set the environment variables for API keys:

```bash
export OPENAI_API_KEY="your-openai-key"
export GOOGLE_API_KEY="your-google-api-key"
```

For examples that use stateful MCP tools like browser automation, start the Playwright MCP server in a separate terminal:

```bash
npx @playwright/mcp@latest --headless --port 8931
```

**Note:** The `--headless` flag runs the server without a visible browser window; this is recommended when no browser is installed yet, as it avoids startup failures. To use an actual (non-headless) browser, refer to the [Playwright MCP documentation](https://github.com/microsoft/playwright-mcp).

### Local MCP Servers

For STDIO, SSE, and HTTP transports using local servers, open a terminal in the library root (`libs/gllm-agents`) and run:

- For STDIO:

```bash
poetry run python gllm_agents/examples/mcp_servers/mcp_server_stdio.py
```

- For SSE:

```bash
poetry run python gllm_agents/examples/mcp_servers/mcp_server_sse.py
```

- For HTTP:

```bash
poetry run python gllm_agents/examples/mcp_servers/mcp_server_http.py
```

Note: Start the appropriate server before running the client examples for that transport.

### Running Examples

Run every example from the library root with `poetry run python gllm_agents/examples/<file>.py`. The LangGraph and LangChain examples use OpenAI; the Google ADK examples use Google ADK, as indicated in the headings.

#### LangChain Examples

##### STDIO Transport
- Non-Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_langchain_mcp_stdio.py
```
- Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_langchain_mcp_stdio_stream.py
```

##### SSE Transport
- Non-Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_langchain_mcp_sse.py
```
- Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_langchain_mcp_sse_stream.py
```

##### HTTP Transport
- Non-Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_langchain_mcp_http.py
```
- Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_langchain_mcp_http_stream.py
```

#### Google ADK Examples

##### STDIO Transport
- Non-Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_google_adk_mcp_stdio.py
```
- Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_google_adk_mcp_stdio_stream.py
```

##### SSE Transport
- Non-Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_google_adk_mcp_sse.py
```
- Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_google_adk_mcp_sse_stream.py
```

##### HTTP Transport
- Non-Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_google_adk_mcp_http.py
```
- Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_google_adk_mcp_http_stream.py
```

#### LangGraph Examples (OpenAI)

##### STDIO Transport
- Non-Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_langgraph_mcp_stdio.py
```
- Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_langgraph_mcp_stdio_stream.py
```

##### SSE Transport
- Non-Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_langgraph_mcp_sse.py
```
- Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_langgraph_mcp_sse_stream.py
```

##### HTTP Transport
- Non-Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_langgraph_mcp_http.py
```
- Streaming:
```bash
poetry run python gllm_agents/examples/hello_world_langgraph_mcp_http_stream.py
```

### Multi-Server Example

This LangChain example uses multiple MCP servers: Playwright (for browser actions) and a random name generator (SSE transport) with persistent sessions across multiple `arun` calls.

1. Start the Playwright server:

```bash
npx @playwright/mcp@latest --headless --port 8931
```

2. In another terminal, start the Name Generator SSE server:

```bash
poetry run python gllm_agents/examples/mcp_servers/mcp_name.py
```

3. Run the multi-server client example:

```bash
poetry run python gllm_agents/examples/hello_world_langchain_mcp_multi_server.py
```

**3. Running Individual A2A Examples:**

* Navigate to the library's root directory (e.g., `libs/gllm-agents` if you cloned the repository).
* Open a new terminal and navigate to the `gllm_agents/examples` directory to run the A2A server.

**LangChain Server:**
```bash
python hello_world_a2a_langchain_server.py
```

* Open a new terminal and navigate to the `gllm_agents/examples` directory to run the A2A client.

**LangChain Client:**
```bash
python hello_world_a2a_langchain_client.py
```

**LangChain Client Integrated with Agent Workflow:**
```bash
python hello_world_a2a_langchain_client_agent.py
```

**LangChain Client Streaming:**
```bash
python hello_world_a2a_langchain_client_stream.py
```


## Architectural Notes

### Memory Features

The library supports Mem0 as a memory backend for long-term conversation recall. Key features:
- Automatic persistence of user-agent interactions via `memory_backend="mem0"`.
- Semantic search for relevant past conversations.
- New `built_in_mem0_search` tool for explicit recall by time period (e.g., "yesterday", "last week", "July 2025").
- Date range parsing for natural language time filters using `dateparser`.
- Conditional auto-augmentation (disabled by default to reduce noise; enable with `memory_auto_augment=True`).
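
The library's actual time-filter parsing uses `dateparser`; the stdlib-only sketch below (with the hypothetical helper name `parse_time_filter`) merely illustrates how a natural-language phrase can be mapped to a `created_at` date range:

```python
from datetime import datetime, timedelta


def parse_time_filter(phrase: str, now: datetime) -> tuple[datetime, datetime]:
    """Map a natural-language phrase to a half-open (start, end) datetime range."""
    today = now.replace(hour=0, minute=0, second=0, microsecond=0)
    if phrase == "yesterday":
        return today - timedelta(days=1), today
    if phrase == "last week":
        # Monday of the previous calendar week.
        start = today - timedelta(days=today.weekday() + 7)
        return start, start + timedelta(days=7)
    raise ValueError(f"unsupported phrase: {phrase!r}")


start, end = parse_time_filter("yesterday", datetime(2025, 7, 10, 15, 30))
# start/end bound the memories whose created_at falls on the previous day.
```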

#### Mem0 Date Recall Example

Use the coordinator example with memory enabled:

```bash
poetry run python gllm_agents/examples/hello_world_a2a_mem0_coordinator_server.py
```

In the client, enable the Mem0 backend:
```python
agent = LangGraphAgent(
    name="client",
    instruction="...",
    model="gpt-4o-mini",
    memory_backend="mem0",
)
```

To test recall: after a few interactions, ask "What did we discuss yesterday?" and the agent will use the `built_in_mem0_search` tool to filter memories by `created_at`.

### Agent Interface (`AgentInterface`)

The `gllm_agents.agent.interface.AgentInterface` class defines a standardized contract for all agent implementations within the GLLM Agents ecosystem. It ensures that different agent types (e.g., LangGraph-based, Google ADK-based) expose a consistent set of methods for core operations.

Key methods defined by `AgentInterface` typically include:
- `arun()`: For asynchronous execution of the agent that returns a final consolidated response.
- `arun_stream()`: For asynchronous execution that streams back partial responses or events from the agent.

By adhering to this interface, users can interact with various agents in a uniform way, making it easier to switch between or combine different agent technologies.
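
The contract described above can be sketched as an abstract base class; the exact signatures in `gllm_agents.agent.interface` may differ, and `EchoAgent` is a made-up implementation used only to demonstrate the shape:

```python
import asyncio
from abc import ABC, abstractmethod
from typing import Any, AsyncIterator


class AgentInterfaceSketch(ABC):
    """Minimal contract shared by all agent implementations (illustrative)."""

    @abstractmethod
    async def arun(self, query: str, **kwargs: Any) -> dict[str, Any]:
        """Execute the agent and return a final consolidated response."""

    @abstractmethod
    def arun_stream(self, query: str, **kwargs: Any) -> AsyncIterator[str]:
        """Execute the agent, streaming back partial responses or events."""


class EchoAgent(AgentInterfaceSketch):
    """Trivial implementation used only to demonstrate the contract."""

    async def arun(self, query: str, **kwargs: Any) -> dict[str, Any]:
        return {"output": f"echo: {query}"}

    async def arun_stream(self, query: str, **kwargs: Any) -> AsyncIterator[str]:
        for token in query.split():
            yield token


result = asyncio.run(EchoAgent().arun("hello world"))
```

Because every agent exposes the same two entry points, calling code can hold any implementation behind the interface type and swap engines without changes.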

### Inversion of Control (IoC) / Dependency Injection (DI)

The agent implementations (e.g., `LangGraphAgent`, `GoogleADKAgent`) utilize Dependency Injection. For instance, `LangGraphAgent` accepts an `agent_executor` (like one created by LangGraph's `create_react_agent`) in its constructor. Similarly, `GoogleADKAgent` accepts a native `adk_native_agent`. This allows the core execution logic to be provided externally, promoting flexibility and decoupling the agent wrapper from the specific instantiation details of its underlying engine.
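
The injection pattern can be sketched as follows; `InjectedAgent` and `stub_executor` are hypothetical names for illustration, standing in for the real wrappers that accept an `agent_executor` or `adk_native_agent` in their constructors:

```python
from typing import Callable


class InjectedAgent:
    """Illustrative wrapper: the execution engine is injected, not built internally."""

    def __init__(self, name: str, agent_executor: Callable[[dict], dict]):
        self.name = name
        self._executor = agent_executor  # injected dependency

    def run(self, query: str) -> dict:
        # Delegate to whatever engine the caller provided.
        return self._executor({"input": query})


# Any callable with the right shape can be swapped in, e.g. a stub for tests:
def stub_executor(state: dict) -> dict:
    return {"output": state["input"].upper()}


agent = InjectedAgent(name="demo", agent_executor=stub_executor)
result = agent.run("hello")
```

Because the wrapper never constructs its engine, tests can inject a stub like the one above, while production code injects, say, the output of LangGraph's `create_react_agent`.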
