Metadata-Version: 2.4
Name: openai-resptools
Version: 0.1.0
Summary: Lightweight tool-calling runner for the OpenAI Responses API.
Project-URL: Homepage, https://github.com/hoopengo/openai-resptools
Project-URL: Repository, https://github.com/hoopengo/openai-resptools
Project-URL: Issues, https://github.com/hoopengo/openai-resptools/issues
Author-email: hoopengo <hoopengo@yandex.ru>
License-File: LICENSE
Requires-Python: >=3.9
Requires-Dist: openai>=2.17.0
Requires-Dist: pydantic>=2.12.5
Description-Content-Type: text/markdown

<p align="center">
  <img src="thumbnail.png" alt="openai-resptools" width="608" />
</p>

<h1 align="center">openai-resptools</h1>

<p align="center">
  Lightweight tool-calling runner for the OpenAI Responses API.
</p>

## What it is

`openai-resptools` is a small Python helper library that:
- Registers Python callables as OpenAI function tools
- Generates OpenAI-compatible tool schemas from Python signatures
- Runs the Responses API tool-calling loop until the model returns final text
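To illustrate the schema-generation idea, here is a simplified, self-contained sketch of how a function signature can be turned into an OpenAI-style function-tool schema using `inspect`. This is an illustration of the concept only, not the library's actual implementation:

```python
import inspect

# Map a few Python annotations to JSON Schema types (illustrative subset).
_JSON_TYPES = {int: "integer", float: "number", str: "string", bool: "boolean"}


def schema_for(fn):
    """Build an OpenAI-style function-tool schema from a callable's signature."""
    sig = inspect.signature(fn)
    properties = {}
    required = []
    for name, param in sig.parameters.items():
        properties[name] = {"type": _JSON_TYPES.get(param.annotation, "string")}
        if param.default is inspect.Parameter.empty:
            required.append(name)  # no default => the model must supply it
    return {
        "type": "function",
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }


def greet(name: str, excited: bool = False) -> str:
    """Return a greeting."""
    return f"Hello, {name}{'!' if excited else '.'}"


print(schema_for(greet)["parameters"]["required"])  # → ['name']
```

The docstring becomes the tool description, and parameters without defaults land in `required`.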

## Features

- Simple `ToolRegistry` with a `@registry.tool()` decorator
- Optional “registry as a class” pattern (public methods auto-registered)
- `ToolRunner` that round-trips model output items back into input
- Configurable `max_iters`, `max_retry`, `parallel_tool_calls`, and `on_event` callback
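The round-trip behavior can be sketched against a stubbed model. The item shapes below are loosely modeled on Responses API output items and are simplified for illustration; the library's real loop also handles retries, parallel calls, and events:

```python
import json


def run_loop(respond, tools, input_items, max_iters=6):
    """Sketch of the tool-calling loop: feed the model's output items back
    into the input until no function calls remain, then return final text."""
    for _ in range(max_iters):
        output = respond(input_items)
        calls = [item for item in output if item["type"] == "function_call"]
        if not calls:
            # No tool calls left: collect and return the final text.
            return "".join(i["text"] for i in output if i["type"] == "output_text")
        # Round-trip: append the model's output, then one result per call.
        input_items.extend(output)
        for call in calls:
            result = tools[call["name"]](**json.loads(call["arguments"]))
            input_items.append({
                "type": "function_call_output",
                "call_id": call["call_id"],
                "output": json.dumps(result),
            })
    raise RuntimeError("max_iters exceeded without a final answer")


# Stubbed "model": first requests add(2, 3), then answers with the result.
def fake_respond(items):
    if not any(item["type"] == "function_call_output" for item in items):
        return [{"type": "function_call", "call_id": "c1",
                 "name": "add", "arguments": '{"a": 2, "b": 3}'}]
    total = json.loads(items[-1]["output"])
    return [{"type": "output_text", "text": f"The sum is {total}."}]


tools = {"add": lambda a, b: a + b}
items = [{"type": "message", "role": "user", "content": "2+3?"}]
print(run_loop(fake_respond, tools, items))  # → The sum is 5.
```

`max_iters` bounds how many model round trips the loop will make before giving up.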

## Install

```bash
pip install openai-resptools
```

## Quickstart

Set your API key:

```bash
export OPENAI_API_KEY="..."
```

Then run a tool-calling loop:

```python
import os
from typing import Dict

from openai import OpenAI

from openai_resptools import ToolRegistry, ToolRunner

store: Dict[str, str] = {}

registry = ToolRegistry()


@registry.tool()
def write(key: str, text: str) -> dict:
    """Save text under a key and return a small status payload."""
    store[key] = text
    return {"ok": True, "key": key, "length": len(text)}


@registry.tool()
def read(key: str) -> dict:
    """Read text by key. Returns null text if missing."""
    return {"key": key, "text": store.get(key)}


client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

model = os.getenv("OPENAI_MODEL", "gpt-4o-mini")

runner = ToolRunner(
    client,
    model=model,
    registry=registry,
    max_iters=6,
    parallel_tool_calls=False,
)

prompt = (
    "Do the following steps using tools:\n"
    "1) Call write to save the text 'Hello from tools!' under key 'greeting'.\n"
    "2) Call read for key 'greeting'.\n"
    "3) Reply with ONLY the value of the text you read.\n"
)

result = runner.run(prompt)
print(result.text)
```

## Patterns

### Registry as a class

Subclass `ToolRegistry` and expose public methods as tools:

```python
from openai_resptools import ToolRegistry


class MyServiceTools(ToolRegistry):
    def __init__(self, db_conn: str) -> None:
        super().__init__()
        self.db_conn = db_conn

    def get_user_status(self, user_id: int) -> str:
        return f"User {user_id} is active (DB: {self.db_conn})"


service = MyServiceTools(db_conn="postgres://localhost:5432")
print(service.names())
```
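The auto-registration pattern can be approximated in plain Python with `inspect.getmembers`. This is a hypothetical sketch of the idea, not the library's internals; `AutoRegistry` and its method names are invented here:

```python
import inspect


class AutoRegistry:
    """Collect public methods (no leading underscore) as named tools."""

    def __init__(self):
        self._tools = {}
        for name, member in inspect.getmembers(self, predicate=inspect.ismethod):
            # Skip private methods and the registry's own helpers.
            if not name.startswith("_") and name not in ("names", "get"):
                self._tools[name] = member

    def names(self):
        return sorted(self._tools)

    def get(self, name):
        return self._tools[name]


class Greeter(AutoRegistry):
    def hello(self, who: str) -> str:
        return f"Hello, {who}!"


g = Greeter()
print(g.names())               # → ['hello']
print(g.get("hello")("world"))  # → Hello, world!
```

Because the collected methods are bound, each tool call carries the instance state (like `db_conn` above) for free.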

### Manual dispatch (no runner)

You can generate tool schemas and call tools yourself:

```python
import json

from openai_resptools import ToolRegistry

registry = ToolRegistry()


@registry.tool()
def calculator(a: int, b: int, op: str) -> int:
    if op == "+":
        return a + b
    if op == "-":
        return a - b
    raise ValueError(f"Unsupported operation: {op!r}")


tools_payload = registry.as_openai_tools()

mock_call = {"name": "calculator", "arguments": '{"a": 10, "b": 5, "op": "+"}'}
args = json.loads(mock_call["arguments"])
output = registry.get(mock_call["name"]).call(args)
print(output)
```
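When dispatching manually, it is worth guarding the call so that an unknown tool name, malformed JSON, or a tool error comes back as an error payload the model can react to, rather than an exception. `safe_dispatch` below is a hypothetical helper over a plain dict of callables, not part of the library:

```python
import json


def safe_dispatch(tools, name, raw_arguments):
    """Dispatch one tool call defensively, always returning a
    JSON-serializable payload suitable for feeding back to the model."""
    if name not in tools:
        return {"ok": False, "error": f"Unknown tool: {name!r}"}
    try:
        args = json.loads(raw_arguments)
    except json.JSONDecodeError as exc:
        return {"ok": False, "error": f"Bad arguments JSON: {exc}"}
    try:
        return {"ok": True, "result": tools[name](**args)}
    except Exception as exc:  # surface tool failures to the model
        return {"ok": False, "error": str(exc)}


def calculator(a: int, b: int, op: str) -> int:
    if op == "+":
        return a + b
    raise ValueError(f"Unsupported operation: {op!r}")


tools = {"calculator": calculator}
print(safe_dispatch(tools, "calculator", '{"a": 10, "b": 5, "op": "+"}'))
print(safe_dispatch(tools, "calculator", '{"a": 1, "b": 2, "op": "*"}'))
```

Returning errors as data rather than raising lets the model correct its own call on the next turn.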

## Development

```bash
pip install -e ".[dev]"
pytest
ruff check .
```

## License

Apache License 2.0. See `LICENSE`.
