Metadata-Version: 2.4
Name: redis-agent-kit
Version: 0.1.0
Summary: Reusable infrastructure for building AI agents with Redis
Project-URL: Homepage, https://github.com/redis-developer/redis-agent-kit
Project-URL: Documentation, https://github.com/redis-developer/redis-agent-kit#readme
Project-URL: Repository, https://github.com/redis-developer/redis-agent-kit
Project-URL: Issues, https://github.com/redis-developer/redis-agent-kit/issues
Author-email: Redis <oss@redis.com>
License: MIT
Keywords: agents,ai,llm,rag,redis
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.11
Requires-Dist: httpx
Requires-Dist: litellm
Requires-Dist: pydantic-settings
Requires-Dist: pydantic>=2.0.0
Requires-Dist: pydocket
Requires-Dist: python-ulid>=2.0.0
Requires-Dist: pyyaml
Requires-Dist: redis>=5.0.0
Requires-Dist: redisvl
Provides-Extra: all
Requires-Dist: click>=8.0.0; extra == 'all'
Requires-Dist: fastapi>=0.100.0; extra == 'all'
Requires-Dist: fastmcp>=3.0.0; extra == 'all'
Requires-Dist: nest-asyncio>=1.5.0; extra == 'all'
Requires-Dist: rich>=13.0.0; extra == 'all'
Requires-Dist: uvicorn>=0.20.0; extra == 'all'
Provides-Extra: api
Requires-Dist: fastapi>=0.100.0; extra == 'api'
Requires-Dist: uvicorn>=0.20.0; extra == 'api'
Provides-Extra: cli
Requires-Dist: click>=8.0.0; extra == 'cli'
Requires-Dist: nest-asyncio>=1.5.0; extra == 'cli'
Requires-Dist: rich>=13.0.0; extra == 'cli'
Provides-Extra: dev
Requires-Dist: codespell<3,>=2.4.1; extra == 'dev'
Requires-Dist: fastmcp>=3.0.0; extra == 'dev'
Requires-Dist: mypy>=1.0.0; extra == 'dev'
Requires-Dist: nest-asyncio>=1.5.0; extra == 'dev'
Requires-Dist: pre-commit>=4.0.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23.0; extra == 'dev'
Requires-Dist: pytest-cov>=4.0.0; extra == 'dev'
Requires-Dist: pytest>=8.0.0; extra == 'dev'
Requires-Dist: ruff>=0.1.0; extra == 'dev'
Requires-Dist: testcontainers[redis]>=4.0.0; extra == 'dev'
Provides-Extra: examples
Requires-Dist: click>=8.0.0; extra == 'examples'
Requires-Dist: fastapi>=0.100.0; extra == 'examples'
Requires-Dist: fastmcp>=3.0.0; extra == 'examples'
Requires-Dist: langchain-openai>=0.2.0; extra == 'examples'
Requires-Dist: langgraph>=0.2.0; extra == 'examples'
Requires-Dist: nest-asyncio>=1.5.0; extra == 'examples'
Requires-Dist: openai-agents>=0.0.3; extra == 'examples'
Requires-Dist: python-dotenv>=1.0.0; extra == 'examples'
Requires-Dist: rich>=13.0.0; extra == 'examples'
Requires-Dist: uvicorn>=0.20.0; extra == 'examples'
Provides-Extra: mcp
Requires-Dist: fastmcp>=3.0.0; extra == 'mcp'
Description-Content-Type: text/markdown

# Redis Agent Kit

**Production-ready agent infrastructure with Redis.**

RAK separates agent execution from your API, giving you durable tasks that survive failures, workers that scale independently, and visibility into every step.

## Why RAK?

| Problem | RAK Solution |
|---------|--------------|
| Agent failures are silent and unrecoverable | Persistent task state with automatic retry |
| Scaling agents means scaling your whole API | Decouple APIs from agent workers — scale each independently |
| No visibility into long-running agent work | Real-time progress tracking and status |
| Agent needs human input mid-execution | Pause, collect input, resume seamlessly |
| Agents don't remember or learn | Use conversation history and long-term memory |

## Quick Start

```bash
pip install redis-agent-kit[all]
```

**1. Write your agent in any framework** — a simple async function:
```python
# agent.py
async def my_agent(task_id, thread_id, message, context):
    # Use any framework: OpenAI, LangChain, LangGraph, etc.
    return {"answer": f"Processed: {message}"}
```

**2. Wrap it with AgentKit**:
```python
# server.py
from redis_agent_kit import AgentKit
from redis_agent_kit.api import create_app
from agent import my_agent

kit = AgentKit(agent_callable=my_agent)  # Uses RAK_REDIS_URL or localhost:6379
app = create_app(kit=kit)
```

**3. Run worker + server**:
```bash
rak worker --name my_agent --tasks agent:my_agent  # Terminal 1
uvicorn server:app                                  # Terminal 2
```

**4. Invoke your agent**:
```bash
curl -X POST http://localhost:8000/tasks \
  -H "Content-Type: application/json" \
  -d '{"message": "What is Redis?"}'
# Returns: {"task_id": "...", "thread_id": "...", "status": "queued"}

curl http://localhost:8000/tasks/{task_id}
# Returns: {"status": "done", "result": {"answer": "..."}}
```

## How It Works

Your agent runs inside a **task**. Each task is one invocation of your agent with:
- **Status tracking** — queued → in_progress → done/failed
- **Progress updates** — emit messages as work happens
- **Result/error storage** — persist outcomes in Redis
- **Conversation context** — tasks belong to threads for multi-turn chat
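
The status flow above is forward-only: a task is queued, picked up by a worker, and ends in a terminal state. A minimal sketch of that lifecycle (the status names come from this README; the transition helper is illustrative, not part of the RAK API):

```python
# Illustrative task lifecycle, mirroring queued -> in_progress -> done/failed.
from enum import Enum


class TaskStatus(str, Enum):
    QUEUED = "queued"
    IN_PROGRESS = "in_progress"
    DONE = "done"
    FAILED = "failed"


# Legal forward transitions; terminal states allow none.
TRANSITIONS = {
    TaskStatus.QUEUED: {TaskStatus.IN_PROGRESS},
    TaskStatus.IN_PROGRESS: {TaskStatus.DONE, TaskStatus.FAILED},
    TaskStatus.DONE: set(),
    TaskStatus.FAILED: set(),
}


def can_transition(current: TaskStatus, new: TaskStatus) -> bool:
    """Return True if a task may move from `current` to `new`."""
    return new in TRANSITIONS[current]
```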

## Memory

RAK provides conversation history and long-term memory:

```python
async def my_agent(ctx):
    # Add messages to conversation history
    await ctx.memory.add_message("user", ctx.message)

    # Get recent conversation
    messages = await ctx.memory.get_messages(limit=10)

    # Search long-term memories
    relevant = await ctx.memory.search("user preferences")

    # Explicitly store important information
    await ctx.memory.create_memory("User prefers dark mode")

    response = generate_response(messages, relevant)
    await ctx.memory.add_message("assistant", response)

    return {"response": response}
```

Memory is enabled by default; disable it with `RAK_MEMORY__ENABLED=false`. Retrieved messages can be rendered in multiple formats: `messages.markdown()`, `messages.json()`, or `messages.dict()`.
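
As a rough illustration of what the markdown rendering of a conversation might look like, here is a hypothetical standalone helper (`messages_to_markdown` is not RAK code; it only sketches the idea behind `messages.markdown()`):

```python
# Hypothetical helper: render a role/content message list as a markdown
# transcript, one bolded role per turn.
def messages_to_markdown(messages: list[dict]) -> str:
    """Render [{'role': ..., 'content': ...}, ...] as markdown."""
    return "\n\n".join(f"**{m['role']}**: {m['content']}" for m in messages)
```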

## Real-Time Streaming

Push task progress and LLM tokens to clients over Server-Sent Events, backed by Redis Pub/Sub:

```python
from redis_agent_kit import AgentKit, StreamConfig
from redis_agent_kit.api import create_app

stream_config = StreamConfig(enabled=True)
kit = AgentKit(agent_callable=my_agent, stream_config=stream_config)
app = create_app(kit=kit, stream_config=stream_config)
```

```js
const es = new EventSource(`/tasks/${taskId}/stream`);
es.addEventListener('token', (e) => process.stdout.write(JSON.parse(e.data).message));
es.addEventListener('done',  (e) => { console.log(JSON.parse(e.data).result); es.close(); });
```

Supports per-task, per-session, and global channel scopes. See the [Streaming guide](docs/guide/streaming.md) for token streaming, event filtering, and replay.
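
A non-browser client can consume the same stream by splitting the response body into SSE frames. A minimal parser sketch (it follows the standard SSE wire format; the `token`/`done` event names come from the JS example above, and this helper is not part of the RAK API):

```python
# Minimal Server-Sent Events parser: split a raw SSE stream into
# (event, data) pairs. Blank lines delimit frames; multiple data:
# lines in one frame are joined with newlines, per the SSE format.
def parse_sse(raw: str) -> list[tuple[str, str]]:
    events = []
    for block in raw.split("\n\n"):
        event, data_lines = "message", []
        for line in block.splitlines():
            if line.startswith("event:"):
                event = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_lines.append(line[len("data:"):].strip())
        if data_lines:  # frames without data are ignored
            events.append((event, "\n".join(data_lines)))
    return events
```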

## Documentation

- **[Tutorial](docs/tutorial.md)** — Build a complete agent from scratch
- **[Tasks](docs/guide/tasks.md)** — Status, progress, results
- **[Threads](docs/guide/threads.md)** — Conversation management
- **[Memory](docs/guide/memory.md)** — Working and long-term memory
- **[Streaming](docs/guide/streaming.md)** — Real-time SSE and token streaming
- **[Middleware](docs/guide/middleware.md)** — RAG, thread history, auto-emit
- **[Protocols](docs/guide/protocols.md)** — A2A, ACP, MCP exposure
- **[Input Handling](docs/guide/input.md)** — Pause for user input
- **[Pipelines](docs/guide/pipelines.md)** — Ingest and vectorize content
- **[CLI](docs/guide/cli.md)** | **[API](docs/guide/api.md)** — Reference

## License

MIT

