Metadata-Version: 2.4
Name: anoman-ai
Version: 0.1.0
Summary: Official Python SDK for the Anoman AI LLM gateway — secure every AI call
Project-URL: Homepage, https://anoman.io
Project-URL: Documentation, https://docs.anoman.io
Project-URL: Repository, https://github.com/anoman-io/anoman-core
Author-email: Anoman AI <hello@anoman.io>
License-Expression: MIT
Keywords: ai,anoman,gateway,llm,openai,sdk
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries
Classifier: Typing :: Typed
Requires-Python: >=3.9
Requires-Dist: httpx>=0.27
Requires-Dist: pydantic>=2.6
Provides-Extra: dev
Requires-Dist: pytest-asyncio>=0.23; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Requires-Dist: respx>=0.21; extra == 'dev'
Description-Content-Type: text/markdown

# anoman-ai

Official Python SDK for the [Anoman AI](https://anoman.io) gateway. Async-first, OpenAI-compatible, with batch polling and streaming support.

## Installation

```bash
pip install anoman-ai
```

## Quick Start

```python
import asyncio

from anoman_ai import AnomanAI

async def main() -> None:
    async with AnomanAI(api_key="anm-sk-your-key-here") as client:
        response = await client.chat_completion(
            model="claude-sonnet",
            messages=[{"role": "user", "content": "Explain quantum computing in one sentence."}],
        )
        print(response.choices[0].message.content)

asyncio.run(main())
```

## Streaming

```python
# Inside an `async with AnomanAI(...) as client:` block, as in Quick Start
async for chunk in client.chat_completion_stream(
    model="claude-sonnet",
    messages=[{"role": "user", "content": "Write a haiku about AI safety."}],
):
    # Guard against chunks that carry no content (e.g. the final chunk)
    print(chunk.choices[0].delta.content or "", end="")
```

## Batch Routing

Anoman can batch-route non-interactive requests for lower cost:

```python
# If the gateway returns a 202, you get a batch job
# Use poll_batch() to wait for the result
result = await client.poll_batch("job_abc123", max_wait_seconds=300)
print(result.result.choices[0].message.content)
print(f"Saved ${result.savings_usd}")
```
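Under the hood, waiting on a batch job is just a poll-until-done loop with a deadline. The helper below is an illustrative, stdlib-only sketch of that pattern — `poll_until`, `fetch`, and `interval` are made-up names, not part of the SDK:

```python
import asyncio
import time

async def poll_until(fetch, *, interval=2.0, max_wait_seconds=300.0):
    """Call `fetch` repeatedly until it returns a non-None result or the deadline passes."""
    deadline = time.monotonic() + max_wait_seconds
    while time.monotonic() < deadline:
        result = await fetch()
        if result is not None:
            return result
        await asyncio.sleep(interval)  # back off between polls
    raise TimeoutError("job did not complete within max_wait_seconds")
```

With the SDK you would not write this yourself — `poll_batch()` handles it — but the sketch shows why `max_wait_seconds` exists: polling must have a bounded deadline rather than spinning forever.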

## Embeddings

```python
response = await client.create_embedding(
    model="text-embedding-3-small",
    input="Hello world",
)
print(response.data[0].embedding)
```

## Models

```python
models = await client.list_models()
for model in models.data:
    print(f"{model.id} ({model.owned_by})")
```

## Error Handling

```python
import httpx

try:
    response = await client.chat_completion(
        model="claude-sonnet",
        messages=[{"role": "user", "content": "Hello"}],
    )
except httpx.HTTPStatusError as e:
    if e.response.status_code == 401:
        print("Invalid API key")
    elif e.response.status_code == 403:
        print("Blocked by guardrail")
    elif e.response.status_code == 429:
        print("Rate limited")
    else:
        raise  # don't swallow unexpected errors
```
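A 429 is usually transient, so it is common to retry it with exponential backoff. The helper below is a generic, stdlib-only sketch of that pattern, not part of the SDK; it retries any async callable whose exception carries a `response.status_code` (as `httpx.HTTPStatusError` does):

```python
import asyncio

async def with_retries(call, *, retries=3, base_delay=1.0, retry_on=(429, 503)):
    """Retry an async callable when it raises an error carrying a retryable HTTP status."""
    for attempt in range(retries + 1):
        try:
            return await call()
        except Exception as exc:
            status = getattr(getattr(exc, "response", None), "status_code", None)
            if status not in retry_on or attempt == retries:
                raise  # non-retryable status, or retries exhausted
            await asyncio.sleep(base_delay * 2 ** attempt)  # exponential backoff
```

You could then wrap a call as `await with_retries(lambda: client.chat_completion(...))`.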

## Configuration

```python
client = AnomanAI(
    api_key="anm-sk-...",
    base_url="https://api.anoman.io",  # default
    timeout=60.0,                       # 60s default
)
```
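Rather than hard-coding the key, a common pattern is to read it from an environment variable. The variable name below is illustrative — the SDK does not read the environment itself:

```python
import os

# "ANOMAN_API_KEY" is an illustrative name; the SDK does not read it automatically
api_key = os.environ.get("ANOMAN_API_KEY", "anm-sk-placeholder")
```

Pass the result as `AnomanAI(api_key=api_key)` to keep secrets out of source control.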

## Documentation

Full API documentation: [https://docs.anoman.io](https://docs.anoman.io)
