Metadata-Version: 2.4
Name: palbot
Version: 0.1.0
Summary: A simple, lightweight, and reliable framework for building personal AI assistants.
Project-URL: Homepage, https://github.com/zhixiangxue/pal-ai
Project-URL: Repository, https://github.com/zhixiangxue/pal-ai
Project-URL: Issues, https://github.com/zhixiangxue/pal-ai/issues
Author: Xue
License: MIT
License-File: LICENSE
Keywords: agent,ai,assistant,llm,memory,slack
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Requires-Dist: aiohttp
Requires-Dist: fastapi
Requires-Dist: pydantic
Requires-Dist: python-dotenv
Requires-Dist: rich
Requires-Dist: slack-bolt
Requires-Dist: slackify-markdown
Requires-Dist: uvicorn
Provides-Extra: dev
Requires-Dist: hatch; extra == 'dev'
Requires-Dist: twine; extra == 'dev'
Description-Content-Type: text/markdown

<div align="center">

<img src="https://raw.githubusercontent.com/zhixiangxue/pal-ai/main/docs/assets/logo.png" alt="pal" width="120">

[![PyPI version](https://badge.fury.io/py/palbot.svg)](https://badge.fury.io/py/palbot)
[![Python Version](https://img.shields.io/pypi/pyversions/palbot)](https://pypi.org/project/palbot/)
[![License](https://img.shields.io/github/license/zhixiangxue/pal-ai)](LICENSE)

**Simple. Lightweight. Reliable.**

pal is a minimal framework for building personal AI assistants — connect a channel, give it tools and memory, and it's ready to work.

</div>

---

## Key Features

- **Minimal by design.** Agent + Pal + channel — three concepts, no more. The entire framework fits in your head.
- **Persistent out of the box.** Conversation history and long-term memory are saved to `~/.pal/` automatically. Restart anytime, pick up where you left off.
- **Memory is the agent's business.** The agent decides when to recall and what to note — no hard-coded pipelines, no mandatory hooks.
- **Pluggable everywhere.** Swap the LLM (18+ providers via chak), the memory backend, or the channel — each is an independent interface with a one-file implementation.

---

## Quick Start

### Installation

```bash
pip install palbot
```

### Minimal example — Slack bot in 20 lines

```python
import os
from pal import Pal, Agent
from pal.tools import Bash, Python, Web, Search
from pal.messaging.slack import Slack

agent = Agent(
    model_uri="openai/gpt-4o-mini",
    api_key=os.environ["OPENAI_API_KEY"],
    tools=[Bash(), Python(), Web(), Search()],
)

pal = Pal(
    agent=agent,
    channels=[Slack(
        bot_token=os.environ["SLACK_BOT_TOKEN"],
        app_token=os.environ["SLACK_APP_TOKEN"],
    )],
)
pal.run()
```

### Add long-term memory — 3 extra lines

```python
from pal.memory.seeka import SeekaMemory

mem = SeekaMemory(llm_uri="openai/gpt-4o-mini", llm_api_key=os.environ["OPENAI_API_KEY"])

agent = Agent(..., memory=mem)
```

That's it. The agent now remembers facts across sessions, recalls relevant context automatically, and stores everything in `~/.pal/memory` — no path configuration required.

---

## How It Works

```
User message
  → Pal receives it from the channel
  → Agent runs: LLM + tools (as many turns as needed)
  → Pal sends the reply
  → Conversation saved to ~/.pal/conversations/
  → Memory consolidated in the background (~/.pal/memory/)
```

**Agent** handles the reasoning loop — it calls the LLM, executes tools, and evaluates whether the task is complete. It keeps going until it is.

**Pal** is the runtime — it wires a channel to an agent, handles concurrency, and triggers background tasks (memory consolidation, conversation persistence) after each turn.

**Memory** is optional and pluggable. The agent decides when to recall and what to note — memory tools are injected automatically when a memory backend is configured.
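The reasoning loop above can be sketched in a few lines. This is an illustrative stand-in, not pal's actual implementation — `run_agent`, `fake_llm`, and the message dict shape are invented here to show the control flow: call the model, execute any requested tool, and stop once the model no longer asks for one.

```python
# Sketch of the agent loop described above (not pal's real internals):
# call the LLM, run requested tools, repeat until the model is done.
def run_agent(llm, tools, task, max_turns=5):
    history = [{"role": "user", "content": task}]
    for _ in range(max_turns):
        reply = llm(history)                           # one LLM call per turn
        history.append({"role": "assistant", "content": reply["content"]})
        if "tool" not in reply:                        # no tool requested → done
            return reply["content"]
        result = tools[reply["tool"]](reply["args"])   # execute the tool
        history.append({"role": "tool", "content": result})
    return history[-1]["content"]                      # stop after max_turns

# A fake LLM that requests one tool call, then answers:
def fake_llm(history):
    if not any(m["role"] == "tool" for m in history):
        return {"content": "checking...", "tool": "echo", "args": "hi"}
    return {"content": "done"}

print(run_agent(fake_llm, {"echo": lambda a: a.upper()}, "say hi"))  # done
```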

---

## Memory

pal ships with a [seeka](https://github.com/zhixiangxue/seeka-ai) backend. Memory is split across two layers:

| Layer | What | Where |
|-------|------|-------|
| Conversation history | Raw LLM message list, persisted across restarts | `~/.pal/conversations/{agent_id}.json` |
| Long-term memory | Structured facts extracted from conversations, semantic recall | `~/.pal/memory/` |

Both layers are zero-config: the directories are created automatically on first use.

**The agent controls memory.** It calls `note()` to record facts and `recall()` to retrieve them. The runtime calls `dream()` in the background after each turn to consolidate raw notes into structured memories.

```python
# The agent sees these as tools and decides when to use them:
# note(content)  — save something worth remembering
# recall(query)  — search long-term memory before responding
```

To add a different memory backend, subclass `BaseMemory` and implement `note`, `recall`, and `process`.
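As a sketch of that interface — the real `BaseMemory` lives in pal, so a local stand-in is defined here to keep the example self-contained — a toy backend might look like:

```python
# Stand-in for pal's BaseMemory, mirroring the three methods named above
# so this example runs on its own.
class BaseMemory:
    def note(self, content: str) -> None: ...
    def recall(self, query: str) -> list[str]: ...
    def process(self) -> None: ...

class ListMemory(BaseMemory):
    """Naive backend: store raw notes, recall by substring match."""
    def __init__(self):
        self.notes: list[str] = []

    def note(self, content: str) -> None:
        self.notes.append(content)

    def recall(self, query: str) -> list[str]:
        return [n for n in self.notes if query.lower() in n.lower()]

    def process(self) -> None:
        # A real backend would consolidate notes here (dedupe, extract
        # facts); this toy version just drops exact duplicates.
        self.notes = list(dict.fromkeys(self.notes))

mem = ListMemory()
mem.note("User prefers dark mode")
mem.note("User prefers dark mode")
mem.process()
print(mem.recall("dark"))  # ['User prefers dark mode']
```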

---

## Channels

| Channel | Class | Notes |
|---------|-------|-------|
| Slack | `pal.messaging.slack.Slack` | Socket Mode, supports file attachments |

More channels coming. To add your own, subclass `BaseMessaging`.

---

## Agent

```python
Agent(
    model_uri="openai/gpt-4o-mini",   # any chak model URI
    api_key="sk-...",
    system_prompt="...",               # optional
    tools=[...],                       # any chak-compatible tools
    memory=SeekaMemory(...),           # optional
    max_turns=5,                       # max LLM iterations per task
    agent_id="default",                # used for conversation file naming
)
```

pal uses [chak](https://github.com/zhixiangxue/chak-ai) for LLM calls. Any model URI supported by chak works:

| URI | Provider |
|-----|----------|
| `openai/gpt-4o-mini` | OpenAI |
| `anthropic/claude-3-5-sonnet` | Anthropic |
| `google/gemini-1.5-pro` | Google Gemini |
| `bailian/qwen-plus` | Alibaba Bailian |
| `deepseek/deepseek-chat` | DeepSeek |
| `ollama/llama3.1` | Ollama (local) |
| `provider@https://base-url/model` | Any OpenAI-compatible endpoint |
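Swapping providers is therefore a one-line change to the URI (environment variable names below are placeholders):

```python
# Same Agent, different backend — only model_uri and api_key change.
agent = Agent(model_uri="anthropic/claude-3-5-sonnet",
              api_key=os.environ["ANTHROPIC_API_KEY"], ...)

# An OpenAI-compatible endpoint uses the provider@url form:
agent = Agent(model_uri="local@https://llm.internal.example/v1/my-model",
              api_key="...", ...)
```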

---

## Tools

pal works with any tool supported by chak. The standard library (`chak.tools.std`) ships ready to use:

| Tool | What it does |
|------|-------------|
| `Bash` | Execute shell commands |
| `Python` | Run Python code snippets |
| `FileSystem` | Read, write, edit, list files |
| `Web` | Fetch and extract web page content |
| `Search` | Web search (Tavily → Brave → DuckDuckGo) |
| `Http` | Full HTTP client (GET / POST / PUT / PATCH / DELETE) |
| `Pdf` | Extract text and tables from PDF files |

```python
from pal.tools import Bash, Python, FileSystem, Web, Search, Http, Pdf

agent = Agent(..., tools=[Bash(), Python(), FileSystem(), Web(), Search(), Http(), Pdf()])
```

---

## Is pal right for you?

pal is a good fit if:

- You want a working Slack bot (or similar) with tools and memory, not a framework to study.
- You want the agent to own its reasoning — no hand-coded pipelines, no fixed workflows.
- You need to ship quickly and keep the codebase readable.

pal is not a good fit if you need multi-tenant deployments, complex routing between specialized agents, or production-grade observability out of the box.

---

<div align="right"><img src="https://raw.githubusercontent.com/zhixiangxue/pal-ai/main/docs/assets/logo.png" alt="pal" width="120"></div>
