Metadata-Version: 2.4
Name: openai-agents-python-providers
Version: 1.0.0
Summary: Ollama and llama.cpp providers for the OpenAI Agents SDK
Project-URL: Homepage, https://github.com/GethosTheWalrus/openai-agents-python-providers
Project-URL: Repository, https://github.com/GethosTheWalrus/openai-agents-python-providers
Project-URL: Issues, https://github.com/GethosTheWalrus/openai-agents-python-providers/issues
Author-email: Mike Toscano <mike@miketoscano.com>
License-Expression: MIT
License-File: LICENSE
Keywords: agents,llama.cpp,llm,ollama,openai,providers
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Typing :: Typed
Requires-Python: >=3.10
Requires-Dist: openai-agents<1,>=0.16.0
Requires-Dist: python-dotenv>=1.0.0
Provides-Extra: temporal
Requires-Dist: httpx>=0.27.0; extra == 'temporal'
Requires-Dist: temporalio[openai-agents,opentelemetry]>=1.7.0; extra == 'temporal'
Description-Content-Type: text/markdown

# openai-agents-python-providers

Community model providers for the [OpenAI Agents SDK](https://github.com/openai/openai-agents-python).

Because OpenAI's SDK is intentionally focused on first-party integrations, this package provides ready-to-use `ModelProvider` implementations for locally hosted and OpenAI-compatible backends:

| Provider | Backend |
|---|---|
| `OllamaProvider` | [Ollama](https://ollama.com/) |
| `LlamaCppProvider` | [llama.cpp](https://github.com/ggerganov/llama.cpp), [vLLM](https://github.com/vllm-project/vllm), and any OpenAI-compatible server |

## Installation

```bash
pip install openai-agents-python-providers

# or with Temporal support
pip install "openai-agents-python-providers[temporal]"
```

## Quickstart

### Ollama

Make sure Ollama is running and that you have pulled a model (e.g. `ollama pull llama3.2`):

```python
import asyncio
import os
from agents import Agent, Runner, RunConfig
from openai_agents_providers import OllamaProvider

# Configure via environment or parameters
provider = OllamaProvider(
    model=os.getenv("MODEL_NAME", "llama3.2"),
    base_url=os.getenv("PROVIDER_URL", "http://localhost:11434/v1")
)

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
)

async def main():
    result = await Runner.run(
        agent,
        "What is the capital of France?",
        run_config=RunConfig(model_provider=provider),
    )
    print(result.final_output)

asyncio.run(main())
```
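Since `python-dotenv` ships as a dependency, the `PROVIDER_URL` and `MODEL_NAME` variables read above can also live in a `.env` file; a sketch (load it with `dotenv.load_dotenv()` before constructing the provider, as auto-loading is not guaranteed):

```bash
# .env — picked up by dotenv.load_dotenv()
PROVIDER_URL=http://localhost:11434/v1
MODEL_NAME=llama3.2
```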

### llama.cpp

Start a llama.cpp server:

```bash
llama-server --model my-model.gguf --port 8080
```

```python
import asyncio
import os
from agents import Agent, Runner, RunConfig
from openai_agents_providers import LlamaCppProvider

provider = LlamaCppProvider(
    base_url=os.getenv("PROVIDER_URL", "http://localhost:8080/v1"),
    model=os.getenv("MODEL_NAME"),  # optional
    api_key="sk-anything",
)

agent = Agent(
    name="Assistant",
    instructions="You are a helpful assistant.",
)

async def main():
    result = await Runner.run(
        agent,
        "Explain quantum entanglement in one sentence.",
        run_config=RunConfig(model_provider=provider),
    )
    print(result.final_output)

asyncio.run(main())
```

### Temporal Integration

This package works seamlessly with the [Temporal OpenAI Agents Plugin](https://github.com/temporalio/sdk-python/tree/main/temporalio/contrib/openai_agents). You can use local providers like `OllamaProvider` or `LlamaCppProvider` while running agents durably in Temporal workflows.

See [examples/temporal/](examples/temporal/) for a complete "tool-as-activity" demonstration.

```bash
# Install temporal dependencies
uv sync --group temporal

# Start the worker (pointing to your infrastructure)
TEMPORAL_ADDRESS="temporal.example.com:7233" \
PROVIDER_TYPE="ollama" \
MODEL_NAME="llama3.2" \
uv run examples/temporal/worker.py

# Start the workflow
TEMPORAL_ADDRESS="temporal.example.com:7233" \
uv run examples/temporal/starter.py "What is the weather where I am?"
```

## API Reference

### `OllamaProvider`

```python
OllamaProvider(
    *,
    base_url: str = "http://localhost:11434/v1",
    model: str | None = None,
    api_key: str = "ollama",
    **kwargs,          # forwarded to AsyncOpenAI
)
```

| Parameter | Default | Description |
|---|---|---|
| `base_url` | `http://localhost:11434/v1` | Ollama API base URL |
| `model` | `None` | Model name (e.g. `"llama3.2"`, `"qwen3:8b"`). Overrides any name passed by the agent. |
| `api_key` | `"ollama"` | Ignored by Ollama; required by the OpenAI SDK. |

### `LlamaCppProvider`

```python
LlamaCppProvider(
    *,
    base_url: str,
    model: str | None = None,
    api_key: str = "sk-anything",
    **kwargs,          # forwarded to AsyncOpenAI
)
```

| Parameter | Default | Description |
|---|---|---|
| `base_url` | *(required)* | OpenAI-compatible API base URL, e.g. `http://localhost:8080/v1`. |
| `model` | `None` | Model name. Overrides any name passed by the agent. |
| `api_key` | `"sk-anything"` | Ignored by most backends; required by the OpenAI SDK. |
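Because `LlamaCppProvider` targets any OpenAI-compatible server, the same provider works against vLLM. A sketch of serving a model with vLLM (the model name is illustrative; vLLM listens on port 8000 by default):

```bash
# Start vLLM's OpenAI-compatible server (default port 8000)
vllm serve Qwen/Qwen2.5-7B-Instruct
```

Then construct the provider with `base_url="http://localhost:8000/v1"` and the same model name.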

## License

MIT
