Metadata-Version: 2.4
Name: agentfoundry
Version: 1.4.28
Summary: AgentFoundry: A modular autonomous AI agent framework
Author-email: Chris Steel <csteel@syntheticore.com>
License-Expression: LicenseRef-Proprietary
Classifier: Programming Language :: Python :: 3.11
Classifier: Operating System :: OS Independent
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: langchain>=0.1.0
Requires-Dist: langchain_community>=0.1.0
Requires-Dist: langchain_core>=0.1.0
Requires-Dist: langchain_google_genai>=0.1.0
Requires-Dist: langchain-milvus>=0.1.0
Requires-Dist: langchain_ollama>=0.1.0
Requires-Dist: langchain_openai>=0.1.0
Requires-Dist: langchain_xai>=0.1.0
Requires-Dist: langgraph>=0.1.0
Requires-Dist: openai>=1.0.0
Requires-Dist: duckdb>=0.9.0
Requires-Dist: beautifulsoup4>=4.12.0
Requires-Dist: pdfkit>=1.0.0
Requires-Dist: pypdf>=3.0.0
Requires-Dist: pandas>=1.5.0
Requires-Dist: pyodbc>=4.0.0
Requires-Dist: adbc_driver_manager>=0.8.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: requests>=2.31.0
Requires-Dist: jinja2>=3.1.0
Requires-Dist: markdown>=3.4.0
Requires-Dist: markdown-it-py<4.0.0,>=1.0.0
Requires-Dist: cryptography>=41.0.0
Requires-Dist: google_search_results>=2.4.0
Requires-Dist: toml>=0.10.0
Requires-Dist: idna
Requires-Dist: importlib_metadata>=6.8.0
Requires-Dist: packaging
Requires-Dist: pygments
Requires-Dist: requests_toolbelt
Requires-Dist: pytz>=2023.3
Requires-Dist: setuptools>=68.0.0
Requires-Dist: typing_extensions>=4.7.0
Requires-Dist: wheel
Requires-Dist: awscli
Requires-Dist: fastapi>=0.110.0
Requires-Dist: uvicorn[standard]>=0.23.0
Dynamic: license-file
Dynamic: requires-dist

# AIgent

**AIgent** is a modular, extensible AI framework designed to support the construction and orchestration of autonomous agents across a variety of complex tasks. The system is built in Python and leverages modern AI tooling to integrate large language models (LLMs), vector stores, rule-based decision logic, and dynamic tool discovery in secure and performance-conscious environments.

## Features

- Modular agent architecture with support for specialization (e.g., memory agents, reactive agents, compliance agents)
- Cython-compiled backend for performance and IP protection
- Integration with popular frameworks such as LangChain, ChromaDB, and OpenAI
- Support for licensed or embedded deployments via license file verification or compiled-only distribution
- Configurable with runtime enforcement of execution licenses (RSA-signed, machine-bound)
- Fail-fast initialization with eager backend verification (LLM ping, vector store connectivity, KGraph health)
- Comprehensive structured logging with INFO-level startup diagnostics and DEBUG-level per-request tracing

## Use Cases

AIgent is designed to serve as a core intelligence engine for:

- Secure enterprise AI platforms (e.g., QuantumDrive)
- Compliance monitoring and rule-based alerting systems
- Conversational interfaces with dynamic tool execution
- Embedded agents in SaaS and on-premise environments

## Requirements

- Python 3.11+
- Cython
- Compatible dependencies (see `requirements.txt`)

## Configuration

AgentFoundry supports two configuration paths. The recommended approach is explicit configuration with `AgentConfig`.

### Explicit config (recommended)

```python
from agentfoundry.utils.agent_config import AgentConfig
from agentfoundry.registry.tool_registry import ToolRegistry
from agentfoundry.agents.orchestrator import Orchestrator

config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_OPENAI_MODEL": "gpt-4o",
    "AF_VECTORSTORE_PROVIDER": "chroma",
    "AF_CHROMA_URL": "http://localhost:8000",
})

registry = ToolRegistry(config=config)
registry.load_tools_from_directory()
orchestrator = Orchestrator(registry, config=config)
```

### Legacy config (backward compatibility)

Set a config file explicitly and/or use environment variables:

```bash
export AGENTFOUNDRY_CONFIG_FILE="$HOME/.config/agentfoundry/agentfoundry.toml"
export OPENAI_API_KEY="sk-..."
```

```python
from agentfoundry.utils.agent_config import AgentConfig
config = AgentConfig.from_legacy_config()
```

See `docs/Configuration_Guide.md` for full key reference and precedence rules.

### Provider notes

- LLM provider is selected by `LLM_PROVIDER` (`openai`, `ollama`, `grok`, `gemini`). OpenAI requires `OPENAI_API_KEY` when selected.
- Vector store provider is selected by `VECTORSTORE_PROVIDER` (`milvus`, `chroma`, `faiss`).
  - **Milvus**: set `MILVUS_URI` or `MILVUS_HOST` + `MILVUS_PORT`.
  - **Chroma**: set `CHROMA_URL` (remote) or `CHROMADB_PERSIST_DIR` (local).
  - **FAISS**: requires an existing index at `FAISS_INDEX_PATH`.
- ThreadMemory uses OpenAI embeddings by default but falls back to deterministic hash embeddings if `AF_DISABLE_OPENAI_EMBEDDINGS=1` or no API key is present.
- The DuckDB KGraph backend requires `duckdb` and its ADBC drivers.
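
The deterministic hash-embedding fallback mentioned above can be illustrated with a minimal sketch. This is a hypothetical helper for explanation only, not the library's actual implementation:

```python
import hashlib

def hash_embedding(text: str, dim: int = 64) -> list[float]:
    # Map text to a fixed-size pseudo-embedding via SHA-256, so the same
    # input always yields the same vector and no API call is needed.
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    # Tile the 32-byte digest to cover `dim` slots, then scale bytes into [0, 1).
    values = (digest * (dim // len(digest) + 1))[:dim]
    return [b / 256 for b in values]

vec = hash_embedding("hello world")
assert vec == hash_embedding("hello world")  # deterministic
assert len(vec) == 64
```

Hash embeddings are not semantically meaningful, but they keep exact-match retrieval and the rest of the pipeline working when no embedding API is available.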

## Author

**Christopher Steel**  
AI Practice Lead, AlphaSix Corporation  
Founder, Syntheticore, Inc.  
Email: `csteel@syntheticore.com`

## Licensing and Legal Notice

© Syntheticore, Inc. All rights reserved.

> **This software is proprietary and confidential.**  
> Any use, reproduction, modification, distribution, or commercial deployment of AIgent or any part thereof requires **explicit written authorization** from Syntheticore, Inc.

Unauthorized use is strictly prohibited and may result in legal action.

---

For licensing inquiries or permission to use this software, please contact:  
📧 **csteel@syntheticore.com**

## Gradio Chat Interface

A simple Gradio-based chat interface for interacting with the HybridOrchestrator agent.

### Prerequisites

- Ensure you have credentials for your selected LLM provider. For OpenAI:

```bash
export OPENAI_API_KEY=<your_api_key>
```

### Running the App

```bash
python gradio_app.py
```

The interface will be available at http://localhost:7860 by default.

## API Server

AgentFoundry can be accessed programmatically via a FastAPI-based HTTP API. The following endpoints are provided:

- **POST /v1/chat**: Send or continue a multi-turn conversation. Accepts a JSON payload with the conversation history and returns the assistant reply and updated history.
- **POST /v1/orchestrate**: Discover APIs and execute a main task across all agents. Returns aggregated results.
- **POST /v1/cancel**: Cancel an in-flight request by `user_id` and `thread_id`.
- **GET /health**: Health check endpoint.

If a backend is unreachable at startup and `FAIL_FAST` is enabled (the default), the server returns **503 Service Unavailable** with a JSON error body.

### Prerequisites

- Ensure you have credentials for your selected LLM provider. For OpenAI:

```bash
export OPENAI_API_KEY=<your_api_key>
```
- Install FastAPI and Uvicorn (if not already installed):

```bash
pip install fastapi "uvicorn[standard]"
```

### Running the API

```bash
python api_server.py
# Or with auto-reload during development:
uvicorn api_server:app --reload --host 0.0.0.0 --port 8000
```

Interactive API docs will be available at http://localhost:8000/docs

- For Microsoft Graph access (entra_tool), forward the SPA's bearer token in the `Authorization: Bearer <token>` header; the API server injects it into the orchestrator config as `entra_user_assertion` for on-behalf-of token exchange.
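
With the server running, a chat request can be sketched as follows. The payload field names here are assumptions for illustration; consult the interactive docs at `/docs` for the actual schema:

```python
import json
import urllib.request

# Hypothetical payload shape -- the field names are assumptions,
# not the verified schema; see http://localhost:8000/docs for the real one.
payload = {
    "user_id": "u1",
    "thread_id": "t1",
    "history": [{"role": "user", "content": "Hello"}],
}
req = urllib.request.Request(
    "http://localhost:8000/v1/chat",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# With api_server.py running, send it:
#   reply = json.load(urllib.request.urlopen(req))
print(req.get_method(), req.full_url)
```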

## Fail-Fast Initialization

By default, the Orchestrator verifies all backends (LLM, vector store, knowledge graph) at startup. If any backend is unreachable, a `FatalInitializationError` is raised. This exception inherits from `BaseException` (not `Exception`), so it escapes generic `except Exception` handlers and kills the process — ensuring broken deployments are caught immediately.
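
The escape behavior can be demonstrated with a minimal stand-in class (the real `FatalInitializationError` lives in `agentfoundry.utils.exceptions`):

```python
# Stand-in with the same inheritance as described above.
class FatalInitializationError(BaseException):
    """Raised when a required backend is unreachable at startup."""

def start_backend():
    raise FatalInitializationError("vector store unreachable")

try:
    try:
        start_backend()
    except Exception:
        swallowed = True  # never reached: BaseException bypasses this handler
except FatalInitializationError:
    swallowed = False  # only an explicit handler catches it

print(swallowed)  # False
```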

To allow degraded startup (e.g. for development without all backends available), set `FAIL_FAST` to `false`:

```python
config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_FAIL_FAST": "false",  # warn on backend failures instead of crashing
})
```

Or via environment variable:

```bash
export AF_FAIL_FAST=false
```

To catch the error explicitly in your application:

```python
from agentfoundry.utils.exceptions import FatalInitializationError

try:
    orchestrator = Orchestrator(registry, config=config)
except FatalInitializationError as exc:
    logger.critical("Backends unavailable: %s", exc)
    # Start in limited mode or exit
```

The LLM provider is verified with a lightweight ping (`llm.invoke([HumanMessage("ping")])`) to catch invalid API keys at startup rather than on the first user request. Vector store providers call `verify_connectivity()` to eagerly test the backend connection.

See `docs/Configuration_Guide.md` for the full `FAIL_FAST` reference.

## Logging & Debugging

AgentFoundry uses standard Python `logging` throughout. Every module uses `logging.getLogger(__name__)` for hierarchical logger naming. If the host application does not configure logging, `agentfoundry.utils.logger.get_logger()` will create a default log file at `./logs/agentforge.log`.

### Logging strategy

- **INFO level** — Startup and initialization events: backend connections, LLM ping results, config loading, tool registration summaries, and warm-up status. In production, `INFO` gives visibility that everything started correctly.
- **DEBUG level** — Per-request operations: similarity searches, cache hits/misses, tool calls, LangGraph invocations, timing details. Enable for troubleshooting.
- **Timing measurements** — Critical operations (LLM invoke, vector store queries, architect planning) are timed with `time.perf_counter()` and logged with durations in milliseconds.
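
The timing pattern described above can be sketched as follows (the operation body is an illustrative stand-in for a real backend call):

```python
import logging
import time

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

def timed_operation():
    # Time a critical operation and log its duration in milliseconds,
    # mirroring the perf_counter pattern described above.
    start = time.perf_counter()
    result = sum(range(1_000))  # stand-in for an LLM or vector store call
    elapsed_ms = (time.perf_counter() - start) * 1000
    logger.debug("operation completed in %.2f ms", elapsed_ms)
    return result

timed_operation()
```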

### Controlling log level

Set `LOG_LEVEL` in your config:

```python
config = AgentConfig.from_dict({
    "AF_LOG_LEVEL": "DEBUG",
    # ...
})
```

Or configure logging directly:

```python
from agentfoundry.utils.logger import setup_logging

setup_logging(level="INFO", logfile="agentfoundry.log")
```

## Quick Smoke Test (Chroma, local persistence)

This verifies vector search without external APIs:

```bash
export VECTORSTORE_PROVIDER=chroma
export CHROMADB_PERSIST_DIR="$(mktemp -d)"
python - <<'PY'
from agentfoundry.vectorstores.factory import VectorStoreFactory
vs = VectorStoreFactory.get_store(org_id='smoke')
vs.add_texts(["hello world"], metadatas=[{"org_id":"smoke"}], ids=["1"])
hits = vs.similarity_search("hello", k=1, filter={"org_id":"smoke"})
print("Hits:", [h.page_content for h in hits])
PY
```

Expected: `Hits: ['hello world']`.

Notes:
- ThreadMemory falls back to hash embeddings if OpenAI embeddings are unavailable.
- FAISS provider raises if `FAISS_INDEX_PATH` does not exist; initialize with your ingestion tooling.

