Metadata-Version: 2.4
Name: pycelest
Version: 0.1.2
Summary: A production-ready framework for building safe, capable, and observable AI agents.
Project-URL: Homepage, https://github.com/eddycelestin15/celest
Project-URL: Documentation, https://github.com/eddycelestin15/celest#readme
Project-URL: Repository, https://github.com/eddycelestin15/celest
Project-URL: Issues, https://github.com/eddycelestin15/celest/issues
Author-email: Celestin <eddy.celestin.raf@gmail.com>
License: MIT
Keywords: agents,ai,framework,llm,orchestration,react
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.11
Requires-Dist: anyio>=4.0
Requires-Dist: httpx>=0.27
Requires-Dist: opentelemetry-api>=1.24
Requires-Dist: opentelemetry-sdk>=1.24
Requires-Dist: pydantic>=2.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: rich>=13.0
Requires-Dist: structlog>=24.0
Requires-Dist: tenacity>=8.0
Requires-Dist: tiktoken>=0.7
Requires-Dist: toml>=0.10
Provides-Extra: all
Requires-Dist: anthropic>=0.28; extra == 'all'
Requires-Dist: chromadb>=0.5; extra == 'all'
Requires-Dist: mcp>=1.0; extra == 'all'
Requires-Dist: openai>=1.30; extra == 'all'
Requires-Dist: opentelemetry-exporter-otlp>=1.24; extra == 'all'
Requires-Dist: opentelemetry-instrumentation-httpx>=0.45b0; extra == 'all'
Requires-Dist: sentence-transformers>=3.0; extra == 'all'
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.28; extra == 'anthropic'
Provides-Extra: deepseek
Requires-Dist: openai>=1.30; extra == 'deepseek'
Provides-Extra: dev
Requires-Dist: httpx[cli]; extra == 'dev'
Requires-Dist: mypy>=1.10; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23; extra == 'dev'
Requires-Dist: pytest-cov>=5.0; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Requires-Dist: ruff>=0.4; extra == 'dev'
Requires-Dist: types-pyyaml; extra == 'dev'
Requires-Dist: types-toml; extra == 'dev'
Provides-Extra: google
Requires-Dist: openai>=1.30; extra == 'google'
Provides-Extra: grok
Requires-Dist: openai>=1.30; extra == 'grok'
Provides-Extra: mcp
Requires-Dist: mcp>=1.0; extra == 'mcp'
Provides-Extra: mistral
Requires-Dist: openai>=1.30; extra == 'mistral'
Provides-Extra: openai
Requires-Dist: openai>=1.30; extra == 'openai'
Provides-Extra: otel
Requires-Dist: opentelemetry-exporter-otlp>=1.24; extra == 'otel'
Requires-Dist: opentelemetry-instrumentation-httpx>=0.45b0; extra == 'otel'
Provides-Extra: rag
Requires-Dist: chromadb>=0.5; extra == 'rag'
Requires-Dist: sentence-transformers>=3.0; extra == 'rag'
Description-Content-Type: text/markdown

# 🌟 Pycelest — AI Agent Framework

> **Build · Orchestrate · Defend**
> A production-ready framework for building safe, capable, and observable AI agents.

[![PyPI version](https://badge.fury.io/py/pycelest.svg)](https://badge.fury.io/py/pycelest)
[![Python 3.11+](https://img.shields.io/badge/python-3.11+-blue.svg)](https://www.python.org/downloads/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Code style: ruff](https://img.shields.io/badge/code%20style-ruff-000000.svg)](https://github.com/astral-sh/ruff)

---

## What is Pycelest?

Pycelest is a Python framework for building AI agents that are **safe**, **capable**, and **observable** — designed from day one for production deployment, not just research.

Unlike frameworks that bolt guardrails on as an afterthought, Pycelest puts security, traceability, and reliability at the center of its architecture.

---

## Features

| Feature | Description |
|---|---|
| 🔁 **ReAct Loop** | Native Reasoning + Acting loop at the core |
| 🌐 **Multi-Provider** | OpenAI, Anthropic, Google, Mistral, DeepSeek, Grok — one interface |
| 🏠 **Local Models** | Ollama, LM Studio, vLLM — no API key needed |
| 🧠 **Memory** | STM, Scratchpad, RAG, and automatic compression |
| 🛡️ **Guardrails** | Built-in Tool Firewall and Execution Budget |
| 👥 **Multi-Agent** | Native agent collaboration via AgentBus |
| 📡 **Streaming** | First-class async streaming from all providers |
| 🔭 **Observability** | OpenTelemetry traces, metrics, and structured logs |
| 🔌 **Plugin System** | Lifecycle hooks for extending behavior |
| ⚙️ **YAML Config** | Code or YAML — your choice |
| 🔧 **MCP Support** | Connect external tool servers via Model Context Protocol |
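
The Execution Budget row above is easy to picture with a framework-free sketch: a counter object that every tool call and token charge passes through, raising once a limit is hit so the agent loop stops instead of running away. The class and method names below are illustrative only, not Pycelest's actual `ExecBudget` API:

```python
class BudgetExceeded(RuntimeError):
    """Raised when an agent exhausts its execution budget."""

class ExecutionBudget:
    """Track tool calls and tokens; halt the loop when limits are hit.

    Illustrative sketch only -- not Pycelest's real guardrail API.
    """

    def __init__(self, max_tool_executions: int, token_budget: int) -> None:
        self.max_tool_executions = max_tool_executions
        self.token_budget = token_budget
        self.tool_executions = 0
        self.tokens_used = 0

    def charge_tool(self) -> None:
        self.tool_executions += 1
        if self.tool_executions > self.max_tool_executions:
            raise BudgetExceeded("tool execution limit reached")

    def charge_tokens(self, n: int) -> None:
        self.tokens_used += n
        if self.tokens_used > self.token_budget:
            raise BudgetExceeded("token budget exhausted")

budget = ExecutionBudget(max_tool_executions=2, token_budget=1_000)
budget.charge_tool()          # ok
budget.charge_tokens(400)     # ok
budget.charge_tool()          # ok, exactly at the limit
try:
    budget.charge_tool()      # third call exceeds the limit
except BudgetExceeded as exc:
    print(exc)                # tool execution limit reached
```

The real framework wires this kind of check into the ReAct loop itself, which is why `max_tool_executions` and `token_budget` appear directly on `SessionConfig` in the Quick Start below.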

---

## Installation

```bash
pip install pycelest

# With a specific provider (quote the extras so zsh doesn't expand the brackets)
pip install "pycelest[openai]"      # OpenAI, Google, DeepSeek, Grok, Mistral
pip install "pycelest[anthropic]"   # Anthropic Claude

# Local models — no API key needed
# Install Ollama from https://ollama.com, then:
# ollama pull llama3.2

# Everything
pip install "pycelest[all]"
```

---

## Quick Start

```python
import asyncio

from celest import SessionManager, SessionConfig
from celest.providers import OpenAIAdapter

config = SessionConfig(
    system_prompt="You are a helpful, careful AI agent",
    max_iterations=8,
    max_tool_executions=10,
    token_budget=12_000,
)

session = SessionManager(
    config=config,
    provider=OpenAIAdapter(model="gpt-4o"),
)

async def main() -> None:
    result = await session.run("Plan a 3-day trip to Kyoto")
    print(result.response)

asyncio.run(main())
```

---

## Local Models (no API key)

```python
from celest.providers import OllamaAdapter

session = SessionManager(
    config=config,
    provider=OllamaAdapter(model="llama3.2"),  # or mistral, phi3, qwen2.5...
)
```

---

## With Tools

```python
from celest.tools import FunctionTool

@FunctionTool.register(description="Search the web for current information")
async def web_search(query: str) -> str:
    # your implementation
    ...

session = SessionManager(config=config, provider=provider, tools=[web_search])
result = await session.run("What are the latest AI news?")
```
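
Conceptually, a decorator like `FunctionTool.register` inspects the wrapped function so its name, description, and parameters can be advertised to the LLM as a callable tool. A minimal, framework-free sketch of that pattern (the registry and decorator here are illustrative, not Pycelest's actual implementation):

```python
import inspect

# Hypothetical global registry mapping tool names to their metadata.
REGISTRY: dict[str, dict] = {}

def register_tool(description: str):
    """Record the callable's name, description, and parameter names."""
    def decorator(fn):
        sig = inspect.signature(fn)
        REGISTRY[fn.__name__] = {
            "description": description,
            "parameters": list(sig.parameters),
        }
        return fn
    return decorator

@register_tool(description="Search the web for current information")
def web_search(query: str) -> str:
    return f"results for {query!r}"

print(REGISTRY["web_search"])
# {'description': 'Search the web for current information', 'parameters': ['query']}
```

The metadata collected this way is what lets the agent loop present each tool's schema to the model and dispatch its tool calls back to the right Python function.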

---

## With YAML Config

```yaml
# celest.yaml
system_prompt: "You are a helpful AI agent"
max_iterations: 8
provider: openai      # openai | anthropic | ollama | lmstudio | deepseek | grok | mistral
model: gpt-4o
guardrails:
  tool_firewall: ask  # accept | deny | ask
```

```python
from celest import SessionManager

session = SessionManager.from_yaml("celest.yaml")
result = await session.run("Your task here")
```

---

## CLI

```bash
celest init                          # Generate a starter celest.yaml
celest run celest.yaml "Your prompt" # Run an agent from config
```

---

## Multi-Agent

```python
from celest.multi import AgentBus

bus = AgentBus()
researcher = SessionManager(config=research_config, provider=provider, bus=bus)
writer = SessionManager(config=write_config, provider=provider, bus=bus)

result = await researcher.run("Research and write a report on AI trends")
```
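
The idea behind a shared bus is topic-based publish/subscribe: each agent subscribes to the topics it cares about and publishes results for others to pick up. A minimal synchronous sketch of that pattern (the class below is illustrative only and much simpler than the real `AgentBus`):

```python
from collections import defaultdict
from typing import Callable

class MiniBus:
    """Topic-based publish/subscribe channel (illustrative, not Pycelest's AgentBus)."""

    def __init__(self) -> None:
        self._subscribers: dict[str, list[Callable[[str], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[str], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, message: str) -> None:
        # Deliver the message to every handler registered for this topic.
        for handler in self._subscribers[topic]:
            handler(message)

bus = MiniBus()
notes: list[str] = []
bus.subscribe("research.done", notes.append)   # the "writer" agent listens
bus.publish("research.done", "AI trends: agents, RAG, local models")
print(notes)  # ['AI trends: agents, RAG, local models']
```

Passing the same `bus` instance to several `SessionManager`s, as in the example above, is what lets the researcher hand its findings to the writer without the two agents sharing any other state.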

---

## Architecture

```
User Input
    │
    ▼
[Optional: PlanningModule] ──► Goal decomposition + SkillRegistry
    │
    ▼
┌─────────────────────────────────────┐
│         SessionManager              │
│         (ReAct Loop)                │
│                                     │
│  ConversationHistory  MemoryManager │
│  ToolRegistry         RAGAdapter    │
│  PlanningModule       Logger        │
│  Compression          ExecBudget    │
└──────────────┬──────────────────────┘
               │
       ┌───────┴────────┐
       ▼                ▼
  ProviderAdapter   ToolFirewall
  (LLM API)         (Guardrails)
```
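
The ReAct loop at the heart of the diagram boils down to: ask the model for the next step, execute any tool it requests, feed the observation back, and repeat until the model produces a final answer or the iteration budget runs out. A stripped-down, provider-free skeleton of that loop (illustrative only; the `model`/`tools` interfaces here are stand-ins, not Pycelest's real ones):

```python
def react_loop(model, tools: dict, task: str, max_iterations: int = 8) -> str:
    """Minimal ReAct skeleton: reason -> act -> observe, until done."""
    history = [("user", task)]
    for _ in range(max_iterations):
        action = model(history)                       # "reason": pick the next step
        if action["type"] == "final":
            return action["text"]
        tool = tools[action["tool"]]                  # "act": run the requested tool
        observation = tool(**action["args"])
        history.append(("observation", observation))  # "observe": feed the result back
    return "stopped: iteration budget exhausted"

# Stub model: call the search tool once, then answer from the observation.
def stub_model(history):
    if history[-1][0] == "user":
        return {"type": "tool", "tool": "search", "args": {"query": "Kyoto"}}
    return {"type": "final", "text": f"Answer based on: {history[-1][1]}"}

result = react_loop(stub_model, {"search": lambda query: f"facts about {query}"}, "Plan a trip")
print(result)  # Answer based on: facts about Kyoto
```

Everything else in the diagram (memory, compression, the tool firewall, telemetry) hangs off the edges of this loop, which is why `SessionManager` is the single object you configure and run.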

---

## Roadmap

- [x] Project scaffold & specification
- [x] **Phase 1** — Core: SessionManager, ReAct loop, ProviderAdapters, FunctionTool, Guardrails
- [x] **Phase 2** — Memory: STM, Scratchpad, RAG, Compression, Streaming, OpenTelemetry
- [x] **Phase 3** — Advanced: Plugin system, AgentBus, PlanningModule, MCP, CLI, Local models

---

## Contributing

Contributions are welcome! Please open an issue or submit a PR on [GitHub](https://github.com/eddycelestin15/celest).

---

## License

MIT © Celestin
