Metadata-Version: 2.4
Name: agentfoundry
Version: 1.5.43
Summary: AgentFoundry: A modular autonomous AI agent framework
Author-email: Chris Steel <csteel@syntheticore.com>
License-Expression: LicenseRef-Proprietary
Classifier: Programming Language :: Python :: 3.11
Classifier: Operating System :: OS Independent
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: langchain>=1.2.0
Requires-Dist: langchain_community>=0.4.0
Requires-Dist: langchain_core>=1.3.0
Requires-Dist: langchain_google_genai>=4.2.0
Requires-Dist: langchain-milvus>=0.3.0
Requires-Dist: langchain_ollama>=1.1.0
Requires-Dist: langchain-openai>=1.1.14
Requires-Dist: langchain_xai>=1.2.0
Requires-Dist: langgraph>=1.1.0
Requires-Dist: langgraph-checkpoint-sqlite>=3.0.0
Requires-Dist: openai>=2.32.0
Requires-Dist: duckdb>=1.5.0
Requires-Dist: neo4j>=6.1.0
Requires-Dist: paramiko>=4.0.0
Requires-Dist: boto3>=1.42.0
Requires-Dist: sshtunnel>=0.4.0
Requires-Dist: psycopg2-binary>=2.9.0
Requires-Dist: beautifulsoup4>=4.14.0
Requires-Dist: pdfkit>=1.0.0
Requires-Dist: pypdf>=6.10.0
Requires-Dist: pandas>=3.0.0
Requires-Dist: pyodbc>=5.3.0
Requires-Dist: adbc_driver_manager>=1.11.0
Requires-Dist: faiss-cpu>=1.13.0
Requires-Dist: pydantic>=2.13.0
Requires-Dist: requests>=2.33.0
Requires-Dist: jinja2>=3.1.0
Requires-Dist: markdown>=3.10.0
Requires-Dist: markdown-it-py<4.0.0,>=3.0.0
Requires-Dist: cryptography>=46.0.0
Requires-Dist: pycryptodome>=3.23.0
Requires-Dist: pqcrypto>=0.4.0
Requires-Dist: mcp>=1.27.0
Requires-Dist: google_search_results>=2.4.0
Requires-Dist: toml>=0.10.0
Requires-Dist: idna>=3.11
Requires-Dist: importlib_metadata>=9.0.0
Requires-Dist: packaging>=26.0
Requires-Dist: pygments>=2.20.0
Requires-Dist: requests_toolbelt>=1.0.0
Requires-Dist: pytz>=2026.1
Requires-Dist: setuptools>=82.0.0
Requires-Dist: typing_extensions>=4.15.0
Requires-Dist: wheel>=0.46.0
Requires-Dist: awscli>=1.44.0
Dynamic: license-file
Dynamic: requires-dist

# AgentFoundry

**AgentFoundry** is a modular, extensible AI library designed to support the construction and orchestration of autonomous agents across a variety of complex tasks. The system is built in Python and leverages modern AI tooling to integrate large language models (LLMs), vector stores, rule-based decision logic, and dynamic tool discovery in secure and performance-conscious environments.

## Features

- Modular agent architecture with support for specialization (e.g., memory agents, reactive agents, compliance agents)
- Cython-compiled backend for performance and IP protection
- Integration with popular frameworks such as LangChain, LangGraph, Milvus, and OpenAI
- Workflow lifecycle management with delivery waves (pause/resume/cancel/retry/replan)
- Support for licensed or embedded deployments via license file verification or compiled-only distribution
- Configurable with runtime enforcement of execution licenses (PQC-signed, optionally machine-bound)
- Fail-fast initialization with eager backend verification (LLM ping, vector store connectivity, KGraph health)
- Comprehensive structured logging with INFO-level startup diagnostics and DEBUG-level per-request tracing

## Use Cases

AgentFoundry is designed to be embedded as a core intelligence engine in:

- Secure enterprise AI platforms
- Compliance monitoring and rule-based alerting systems
- Applications with dynamic tool execution
- SaaS and on-premise environments

## Requirements

- Python 3.11+
- Cython
- Compatible dependencies (see `requirements.txt`)

## Quick Start

```python
from agentfoundry.utils.agent_config import AgentConfig
from agentfoundry.registry.tool_registry import ToolRegistry
from agentfoundry.agents.orchestrator import Orchestrator

config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_OPENAI_MODEL": "gpt-5.4",
    "AF_VECTORSTORE_PROVIDER": "milvus",
    "AF_MILVUS_URI": "http://localhost:19530",
})

registry = ToolRegistry(config=config)
registry.load_tools_from_directory()
orchestrator = Orchestrator(registry, config=config)

# Run a task
result = orchestrator.run_task("Summarize recent activity", config={
    "configurable": {"user_id": "u1", "thread_id": "t1", "org_id": "myorg"}
})
```

## Configuration

AgentFoundry supports two configuration paths. The recommended approach is explicit configuration with `AgentConfig`.

### Explicit config (recommended)

```python
from agentfoundry.utils.agent_config import AgentConfig

config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_OPENAI_MODEL": "gpt-5.4",
    "AF_VECTORSTORE_PROVIDER": "milvus",
    "AF_MILVUS_URI": "http://localhost:19530",
})
```

### Legacy config (backward compatibility)

Set a config file explicitly and/or use environment variables:

```bash
export AGENTFOUNDRY_CONFIG_FILE="$HOME/.config/agentfoundry/agentfoundry.toml"
export OPENAI_API_KEY="sk-..."
```

```python
from agentfoundry.utils.agent_config import AgentConfig
config = AgentConfig.from_legacy_config()
```

See `docs/Configuration_Guide.md` for full key reference and precedence rules.

### Provider notes

- LLM provider is selected by `LLM_PROVIDER` (`openai`, `ollama`, `grok`, `gemini`). OpenAI requires `OPENAI_API_KEY` when selected.
- Vector store provider is selected by `VECTORSTORE_PROVIDER` (`milvus`, `faiss`).
  - **Milvus**: set `MILVUS_URI` or `MILVUS_HOST` + `MILVUS_PORT`.
  - **FAISS**: requires an existing index at `FAISS_INDEX_PATH`.
- ThreadMemory uses OpenAI embeddings by default but falls back to deterministic hash embeddings if `AF_DISABLE_OPENAI_EMBEDDINGS=1` or no API key is present.
- The DuckDB KGraph backend requires `duckdb` and its ADBC drivers.
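
The hash-embedding fallback mentioned above is deterministic: the same text always maps to the same vector, so offline development stays reproducible. A minimal sketch of the idea (an illustration only, not the library's actual implementation):

```python
import hashlib
import struct


def hash_embedding(text: str, dim: int = 8) -> list[float]:
    """Deterministic pseudo-embedding: derive `dim` floats from a SHA-256
    digest of the text. Identical input always yields an identical vector."""
    buf = hashlib.sha256(text.encode("utf-8")).digest()
    # Re-hash to extend the byte stream until we have 4 bytes per dimension.
    while len(buf) < dim * 4:
        buf += hashlib.sha256(buf).digest()
    values = struct.unpack(f"<{dim}I", buf[: dim * 4])
    # Map each unsigned 32-bit int into the range [-1.0, 1.0).
    return [v / 2**31 - 1.0 for v in values]
```

Note that a hash vector carries stable identity but no semantic similarity, which is why it is only a fallback for development without API access.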

## Workflow Lifecycle

AgentFoundry includes a workflow engine that organizes execution plans into dependency-ordered delivery waves. See `docs/Workflow_Lifecycle_Guide.md` for full documentation.

```python
from agentfoundry.agents.workflow import WorkflowManager, build_waves

manager = WorkflowManager(task_executor=my_executor)
wf = manager.create_workflow(plan, config=config)
manager.run(wf.workflow_id)        # execute all waves
manager.pause(wf.workflow_id)      # pause between waves
manager.resume(wf.workflow_id)     # continue from pause
manager.retry(wf.workflow_id)      # retry failed tasks
manager.cancel(wf.workflow_id)     # cancel and skip remaining
manager.replan(wf.workflow_id)     # regenerate plan from current state
```
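
The `build_waves` helper imported above groups plan tasks into waves by dependency order. The underlying idea can be sketched as Kahn-style topological layering (the task shape and function name below are assumptions for illustration, not the library's API):

```python
def build_waves_sketch(tasks: dict[str, set[str]]) -> list[list[str]]:
    """Group task ids into waves: each wave holds the tasks whose
    dependencies were all completed in earlier waves."""
    remaining = {task: set(deps) for task, deps in tasks.items()}
    done: set[str] = set()
    waves: list[list[str]] = []
    while remaining:
        # Tasks whose remaining dependencies are all satisfied are ready.
        ready = sorted(t for t, deps in remaining.items() if deps <= done)
        if not ready:
            raise ValueError("cyclic dependency among remaining tasks")
        waves.append(ready)
        done.update(ready)
        for task in ready:
            del remaining[task]
    return waves
```

Tasks inside a wave have no mutual dependencies, so a wave is a natural unit for parallel execution and for the pause/resume boundaries shown above.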

## Fail-Fast Initialization

By default, the Orchestrator verifies all backends (LLM, vector store, knowledge graph) at startup. If any backend is unreachable, a `FatalInitializationError` is raised. This exception inherits from `BaseException` (not `Exception`), so it escapes generic `except Exception` handlers — ensuring broken deployments are caught immediately.
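
The effect of subclassing `BaseException` can be demonstrated with a self-contained sketch (the class below is an illustration, not the library's actual exception type):

```python
class FatalInitSketch(BaseException):
    """Illustration only: a FatalInitializationError-style class that
    subclasses BaseException instead of Exception."""


def generic_handler(raiser) -> bool:
    """Return True if a blanket `except Exception` caught the error."""
    try:
        raiser()
    except Exception:
        return True   # ordinary errors land here...
    except BaseException:
        return False  # ...but BaseException subclasses slip past


def boom() -> None:
    raise FatalInitSketch("vector store unreachable")
```

Here `generic_handler(boom)` returns `False`: the blanket `except Exception` never sees the error, so it propagates to the caller instead of being silently swallowed.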

To allow degraded startup (e.g. for development without all backends available):

```python
config = AgentConfig.from_dict({
    "AF_LLM_PROVIDER": "openai",
    "AF_OPENAI_API_KEY": "sk-...",
    "AF_FAIL_FAST": "false",  # warn on backend failures instead of crashing
})
```

To catch the error explicitly:

```python
from agentfoundry.utils.exceptions import FatalInitializationError

try:
    orchestrator = Orchestrator(registry, config=config)
except FatalInitializationError as exc:
    logger.critical("Backends unavailable: %s", exc)
```

See `docs/Configuration_Guide.md` for the full `FAIL_FAST` reference.

## Logging & Debugging

AgentFoundry uses standard Python `logging` throughout. Every module uses `logging.getLogger(__name__)` for hierarchical logger naming.
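
Because logger names mirror the package hierarchy, standard `logging` parent/child resolution applies: setting a level on the `agentfoundry` logger affects every submodule logger beneath it. For example:

```python
import logging

# Raise verbosity for the whole library in one place.
logging.getLogger("agentfoundry").setLevel(logging.DEBUG)

# A module logger created via logging.getLogger(__name__) inside the
# package inherits that level automatically.
child = logging.getLogger("agentfoundry.registry.tool_registry")
```

The same mechanism works in reverse: to quiet a single noisy subsystem, set a higher level on just that child logger.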

### Logging strategy

- **INFO level** — Startup and initialization events: backend connections, LLM ping results, config loading, tool registration summaries, and warm-up status.
- **DEBUG level** — Per-request operations: similarity searches, cache hits/misses, tool calls, LangGraph invocations, timing details.
- **Timing measurements** — Critical operations (LLM invoke, vector store queries, architect planning) are timed and logged with durations in milliseconds.
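
The timing pattern described above can be sketched with standard `logging` and `time.perf_counter` (a generic illustration, not AgentFoundry's internal helper):

```python
import logging
import time

logger = logging.getLogger("agentfoundry.example")


def timed_call(label, fn, *args, **kwargs):
    """Run fn, log its duration in milliseconds at DEBUG level,
    and return (result, elapsed_ms)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed_ms = (time.perf_counter() - start) * 1000.0
    logger.debug("%s completed in %.1f ms", label, elapsed_ms)
    return result, elapsed_ms
```

Logging durations at DEBUG keeps per-request timing out of production logs by default while leaving it one config flip away when diagnosing latency.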

### Controlling log level

```python
config = AgentConfig.from_dict({
    "AF_LOG_LEVEL": "DEBUG",
})
```

Or configure logging directly:

```python
from agentfoundry.utils.logger import setup_logging
setup_logging(level="INFO", logfile="agentfoundry.log")
```

## Notes

- ThreadMemory falls back to hash embeddings if OpenAI embeddings are unavailable.
- FAISS provider raises if `FAISS_INDEX_PATH` does not exist; initialize with your ingestion tooling.

## Author

**Christopher Steel**\
AI Practice Lead, AlphaSix Corporation\
Founder, Syntheticore, Inc.\
Email: `csteel@syntheticore.com`

## Licensing and Legal Notice

© Syntheticore, Inc. All rights reserved.

> **This software is proprietary and confidential.**
> Any use, reproduction, modification, distribution, or commercial deployment of AgentFoundry or any part thereof requires **explicit written authorization** from Syntheticore, Inc.

Unauthorized use is strictly prohibited and may result in legal action.

---

For licensing inquiries or permission to use this software, please contact:
**csteel@syntheticore.com**
