Metadata-Version: 2.4
Name: mcal-ai-autogen
Version: 0.2.5
Summary: Microsoft AutoGen integration for MCAL - Goal-aware memory for multi-agent systems
Author: MCAL Team
License: MIT
Project-URL: Homepage, https://github.com/Shivakoreddi/mcal-ai
Project-URL: Documentation, https://github.com/Shivakoreddi/mcal-ai/tree/main/packages/mcal-autogen
Project-URL: Repository, https://github.com/Shivakoreddi/mcal-ai
Keywords: mcal,autogen,memory,llm,agents,goal-aware,multi-agent
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: mcal-ai>=0.1.0
Provides-Extra: autogen
Requires-Dist: autogen-core>=0.4.0; extra == "autogen"
Requires-Dist: autogen-agentchat>=0.4.0; extra == "autogen"
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21.0; extra == "dev"
Requires-Dist: pytest-cov>=4.0; extra == "dev"
Provides-Extra: all
Requires-Dist: mcal-ai-autogen[autogen,dev]; extra == "all"
Dynamic: license-file

# mcal-ai-autogen

Microsoft AutoGen integration for [MCAL](https://github.com/Shivakoreddi/mcal-ai) (Memory-Context Alignment Layer), bringing goal-aware memory to AutoGen agents.

## Installation

```bash
pip install mcal-ai-autogen

# With AutoGen dependencies
pip install "mcal-ai-autogen[autogen]"
```

## Quick Start

```python
from autogen_agentchat.agents import AssistantAgent
from autogen_ext.models.openai import OpenAIChatCompletionClient
from mcal import MCAL
from mcal_autogen import MCALMemory

# Initialize MCAL
mcal = MCAL(llm_provider="openai")

# Create MCAL-backed memory
memory = MCALMemory(mcal, user_id="user_123")

# Create an agent with MCAL memory
model_client = OpenAIChatCompletionClient(model="gpt-4")
agent = AssistantAgent(
    name="data_engineer",
    model_client=model_client,
    memory=[memory],
    system_message="You are a helpful data engineering assistant.",
)

# Use the agent — MCAL automatically tracks context and decisions
# (run this inside an async function, or via asyncio.run(...))
result = await agent.run(task="How should I set up my ETL pipeline?")
```

## Features

### Goal-Aware Memory

MCAL's unique value is understanding your project's goals and maintaining context across conversations:

```python
mcal = MCAL(llm_provider="anthropic")
memory = MCALMemory(mcal)

# Add relevant context
from autogen_core.memory import MemoryContent
await memory.add(MemoryContent(
    content="We decided to use Kafka for streaming",
    mime_type="text/plain",
    metadata={"category": "architecture", "decision": True}
))

# Query returns goal-relevant results
results = await memory.query("What messaging system should I use?")
# Returns Kafka decision with goal-relevance scoring
```

### Decision Tracking

Track architectural and project decisions automatically:

```python
memory = MCALMemory(
    mcal,
    enable_goal_tracking=True,  # Extract goals from content
    include_decisions=True,      # Include decisions in search
)

# Decisions are automatically tracked
await memory.add(MemoryContent(
    content="After evaluating options, we chose PostgreSQL for its JSON support",
    mime_type="text/plain"
))

# Query finds relevant decisions
results = await memory.query("database selection")
```

### User Isolation

Support multi-tenant scenarios with user isolation:

```python
# Create separate memories for different users
user1_memory = MCALMemory(mcal, user_id="alice")
user2_memory = MCALMemory(mcal, user_id="bob")

# Each user has isolated memory
await user1_memory.add(MemoryContent(content="Alice prefers Python", mime_type="text/plain"))
await user2_memory.add(MemoryContent(content="Bob prefers Rust", mime_type="text/plain"))

# Queries only return user-specific results
results = await user1_memory.query("language preference")
# Only returns Alice's preference
```

### TTL Support

Configure time-to-live for memory entries:

```python
memory = MCALMemory(mcal, default_ttl_minutes=60)  # 1 hour default

# Or per-entry TTL via metadata
await memory.add(MemoryContent(
    content="Temporary context",
    mime_type="text/plain",
    metadata={"ttl_minutes": 15}  # 15 minute TTL
))
```
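The expiry mechanism itself can be sketched with the standard library. This is a standalone illustration of how a default TTL and a per-entry override interact — not the package's actual implementation, and the `TTLStore` name is hypothetical:

```python
import time
from typing import Optional


class TTLStore:
    """Minimal in-memory store illustrating default and per-entry TTL."""

    def __init__(self, default_ttl_minutes: Optional[float] = None):
        self.default_ttl_minutes = default_ttl_minutes
        self._items: list[tuple[str, Optional[float]]] = []  # (content, expiry)

    def add(self, content: str, ttl_minutes: Optional[float] = None) -> None:
        # A per-entry TTL overrides the store default; None means no expiry.
        ttl = ttl_minutes if ttl_minutes is not None else self.default_ttl_minutes
        expires_at = time.monotonic() + ttl * 60 if ttl is not None else None
        self._items.append((content, expires_at))

    def get_all(self) -> list[str]:
        # Expired entries are dropped lazily on read.
        now = time.monotonic()
        self._items = [(c, e) for c, e in self._items if e is None or e > now]
        return [c for c, _ in self._items]


store = TTLStore(default_ttl_minutes=60)
store.add("long-lived context")
store.add("temporary context", ttl_minutes=0)  # expires immediately
print(store.get_all())  # -> ['long-lived context']
```

Lazy expiry on read keeps `add` cheap; a background sweeper would work equally well.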

### Thread Safety

All operations are protected by `RLock` — safe for concurrent access from multiple agents.
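The guarantee above follows the standard re-entrant lock pattern. A generic sketch (not the package's code) of why `RLock` rather than `Lock` matters — a locked method can call another locked method on the same thread without deadlocking:

```python
import threading


class LockedStore:
    """Sketch of RLock-guarded memory operations."""

    def __init__(self):
        self._lock = threading.RLock()
        self._items = []

    def add(self, item) -> None:
        with self._lock:
            self._items.append(item)

    def item_count(self) -> int:
        with self._lock:
            return len(self._items)

    def clear(self) -> int:
        # Re-entrant: this method holds the lock and calls item_count(),
        # which acquires it again on the same thread. A plain Lock would
        # deadlock here.
        with self._lock:
            count = self.item_count()
            self._items.clear()
            return count


store = LockedStore()
threads = [threading.Thread(target=store.add, args=(i,)) for i in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(store.item_count())  # -> 8
```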

## Integration with AutoGen Features

### With AssistantAgent

```python
from autogen_agentchat.agents import AssistantAgent

agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    memory=[memory],  # MCAL memory integrates seamlessly
)
```

### With Teams

```python
from autogen_agentchat.teams import RoundRobinGroupChat

# Share MCAL memory across team members
shared_memory = MCALMemory(mcal, user_id="team_alpha")

coder = AssistantAgent("coder", model_client=model_client, memory=[shared_memory])
reviewer = AssistantAgent("reviewer", model_client=model_client, memory=[shared_memory])

team = RoundRobinGroupChat([coder, reviewer])
```

### Context Window Management

MCAL automatically manages context relevance:

```python
memory = MCALMemory(
    mcal,
    max_results=10,           # Limit results per query
    score_threshold=0.5,      # Minimum relevance score
)

# update_context adds relevant memories to the agent's context
result = await memory.update_context(model_context)
```
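Conceptually, `score_threshold` and `max_results` act as a filter-then-cap over scored memories. A plausible standalone sketch of that selection step (the scoring itself is MCAL's goal-relevance model, treated here as given; `select_memories` is a hypothetical name):

```python
def select_memories(scored, score_threshold=0.5, max_results=10):
    """Keep memories at or above the threshold, best-first, capped at max_results.

    `scored` is a list of (content, relevance_score) pairs.
    """
    kept = [(c, s) for c, s in scored if s >= score_threshold]
    kept.sort(key=lambda pair: pair[1], reverse=True)
    return [c for c, _ in kept[:max_results]]


memories = [("use Kafka", 0.9), ("lunch notes", 0.2), ("Postgres decision", 0.7)]
print(select_memories(memories, score_threshold=0.5, max_results=2))
# -> ['use Kafka', 'Postgres decision']
```

Raising `score_threshold` trades recall for a tighter context window; `max_results` bounds the worst case regardless of how many entries pass the threshold.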

## API Reference

### MCALMemory

```python
class MCALMemory(Memory):
    def __init__(
        self,
        mcal: MCAL,
        user_id: str = "default",
        name: str = "mcal_memory",
        max_results: int = 10,
        score_threshold: float = 0.0,
        default_ttl_minutes: Optional[float] = None,
        enable_goal_tracking: bool = True,
        include_decisions: bool = True,
    ): ...
```

### Key Methods

| Method | Async | Description |
|--------|-------|-------------|
| `add(content)` | ✓ | Add `MemoryContent` to memory |
| `query(query)` | ✓ | Search for relevant memories, returns `MemoryQueryResult` |
| `update_context(model_context)` | ✓ | Update agent context with relevant memories |
| `clear()` | ✓ | Clear all memory entries |
| `close()` | ✓ | Cleanup resources |

### Helper Methods

| Method | Description |
|--------|-------------|
| `add_text(text, metadata=None)` | Convenience wrapper for adding plain text |
| `query_text(query)` | Convenience wrapper returning list of strings |
| `item_count` | Property returning number of stored items |
| `get_all_items()` | Return all non-expired memory items |

## Requirements

- Python >= 3.10
- mcal-ai >= 0.1.0
- autogen-core >= 0.4.0 (optional — gracefully degrades if not installed)
- autogen-agentchat >= 0.4.0 (optional)

## License

MIT License
