Metadata-Version: 2.4
Name: agentmetrics
Version: 0.1.1
Summary: Real-time cost visibility & optimization for AI agents.
License: MIT
Project-URL: Homepage, https://agentmetrics.dev
Project-URL: Repository, https://github.com/agentmetrics/agentmetrics
Project-URL: Documentation, https://agentmetrics.dev/docs
Keywords: ai,agents,observability,cost,langchain,crewai,langgraph
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Requires-Python: >=3.8
Description-Content-Type: text/markdown
Requires-Dist: requests>=2.28

# AgentMetrics Python SDK

[![PyPI version](https://img.shields.io/pypi/v/agentmetrics)](https://pypi.org/project/agentmetrics/)
[![Python](https://img.shields.io/pypi/pyversions/agentmetrics)](https://pypi.org/project/agentmetrics/)
[![License](https://img.shields.io/badge/license-MIT-blue)](https://github.com/agentmetrics/agentmetrics/blob/main/LICENSE)

Real-time cost, latency, and error tracking for AI agents. One decorator. Zero overhead on failure.

```python
from agentmetrics import sentinel

sentinel.configure(api_key="am_xxxxxxxxxxxxxxxx")

@sentinel.track(agent_id="customer_support")
def my_agent(task: str) -> str:
    return call_llm(task)  # your existing agent logic, unchanged
```

Every call is now tracked — duration, status, errors — visible in your [AgentMetrics dashboard](https://agentmetrics.dev).

---

## Prerequisites

You need a running AgentMetrics instance:

- **Self-hosted** (free): [deploy in 60 seconds](https://github.com/agentmetrics/agentmetrics#self-host-in-60-seconds), then sign up at `http://localhost/signup`
- **Cloud**: sign up at [agentmetrics.dev](https://agentmetrics.dev)

After setup, get your SDK key from the dashboard: **Settings → SDK Keys**
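The examples in this README read the key from the `AGENTMETRICS_KEY` environment variable; export it once in your shell:

```shell
export AGENTMETRICS_KEY="am_xxxxxxxxxxxxxxxx"  # from Settings → SDK Keys
```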

---

## Install

```bash
pip install agentmetrics
```

---

## Quick start

```python
import os
from openai import OpenAI
from agentmetrics import sentinel

# Configure once at startup
sentinel.configure(api_key=os.environ["AGENTMETRICS_KEY"])
openai_client = OpenAI()

@sentinel.track(agent_id="customer_support")
def handle_ticket(ticket: str) -> str:
    response = openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": ticket}],
    )
    return response.choices[0].message.content
```

---

## Framework examples

### OpenAI

```python
import os
from openai import OpenAI
from agentmetrics import sentinel

sentinel.configure(api_key=os.environ["AGENTMETRICS_KEY"])
client = OpenAI()

@sentinel.track(agent_id="openai_agent")
def ask(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

### Anthropic

```python
import os
from anthropic import Anthropic
from agentmetrics import sentinel

sentinel.configure(api_key=os.environ["AGENTMETRICS_KEY"])
client = Anthropic()

@sentinel.track(agent_id="claude_agent")
def ask(prompt: str) -> str:
    message = client.messages.create(
        model="claude-sonnet-4-6",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text
```

### LangChain

```python
import os
from langchain_openai import ChatOpenAI
from agentmetrics import sentinel

sentinel.configure(api_key=os.environ["AGENTMETRICS_KEY"])

@sentinel.track(agent_id="langchain_agent")
def run_chain(question: str) -> str:
    llm = ChatOpenAI(model="gpt-4o-mini")
    return llm.invoke(question).content
```

### LangGraph

```python
import os
from agentmetrics import sentinel

sentinel.configure(api_key=os.environ["AGENTMETRICS_KEY"])

@sentinel.track(agent_id="langgraph_workflow")
def run_graph(state: dict) -> dict:
    # `compiled_graph` is your StateGraph after calling .compile()
    return compiled_graph.invoke(state)
```

### CrewAI

```python
import os
from agentmetrics import sentinel

sentinel.configure(api_key=os.environ["AGENTMETRICS_KEY"])

@sentinel.track(agent_id="research_crew")
def run_crew(topic: str) -> str:
    # `crew` is your assembled Crew instance; kickoff() returns a
    # CrewOutput in recent versions, so coerce to str
    return str(crew.kickoff(inputs={"topic": topic}))
```

### Async agents

```python
@sentinel.track(agent_id="async_agent")
async def my_async_agent(task: str) -> str:
    result = await some_llm_call(task)
    return result
```

Sync and async work identically. No extra configuration needed.
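How a single decorator can cover both cases: the standard pattern is to detect coroutine functions at decoration time and return a matching wrapper. This is a minimal sketch of that pattern, not the SDK's actual implementation; `record` and `events` are stand-ins for the real event pipeline.

```python
import asyncio
import functools
import inspect
import time

events = []  # stand-in for the SDK's outbound event queue

def record(agent_id: str, duration: float) -> None:
    # Placeholder: the real SDK would enqueue a metrics event here.
    events.append((agent_id, duration))

def track(agent_id: str):
    """One decorator that wraps sync and async functions alike."""
    def decorator(fn):
        if inspect.iscoroutinefunction(fn):
            @functools.wraps(fn)
            async def async_wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return await fn(*args, **kwargs)
                finally:
                    record(agent_id, time.perf_counter() - start)
            return async_wrapper

        @functools.wraps(fn)
        def sync_wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                record(agent_id, time.perf_counter() - start)
        return sync_wrapper
    return decorator

@track(agent_id="demo")
async def shout(task: str) -> str:
    return task.upper()

print(asyncio.run(shout("hi")))  # prints "HI"
```

Because the `finally` clause does the recording, duration and status are captured whether the wrapped function returns or raises.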

---

## Configuration

```python
sentinel.configure(
    api_key="am_xxxxxxxxxxxxxxxx",        # from Settings → SDK Keys
    base_url="http://localhost:8000/v1",   # omit for cloud, set for self-hosted
)
```

| Parameter | Default | Description |
|---|---|---|
| `api_key` | required | Your SDK key from the dashboard |
| `base_url` | `https://api.agentmetrics.dev/v1` | AgentMetrics API endpoint |

**Tip:** Load from environment:
```python
sentinel.configure(api_key=os.environ["AGENTMETRICS_KEY"])
```

---

## Graceful degradation

If the AgentMetrics server is unreachable or the key is invalid, **your agent keeps running normally**. The SDK never raises exceptions, never blocks execution, and never adds latency to the critical path.

Events are sent fire-and-forget in a background thread with up to 3 retries.
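The fire-and-forget pattern described above can be sketched as a bounded queue drained by a daemon thread. This is an illustrative sketch, not the SDK's actual internals: `EventSender` and its `send` callable are hypothetical names.

```python
import queue
import threading
import time

class EventSender:
    """Sketch: never block or raise on the caller's thread; retry a
    bounded number of times in the background, then give up silently."""

    def __init__(self, send, max_retries: int = 3):
        self._send = send                      # callable that ships one event
        self._max_retries = max_retries
        self._queue = queue.Queue(maxsize=1000)
        threading.Thread(target=self._drain, daemon=True).start()

    def emit(self, event: dict) -> None:
        try:
            self._queue.put_nowait(event)      # never block the caller
        except queue.Full:
            pass                               # drop rather than add latency

    def _drain(self) -> None:
        while True:
            event = self._queue.get()
            for attempt in range(self._max_retries):
                try:
                    self._send(event)
                    break                      # delivered
                except Exception:
                    time.sleep(0.1 * (attempt + 1))  # brief backoff
            self._queue.task_done()            # give up silently after retries
```

The bounded queue is what keeps failure cheap: when the server is down, events are dropped after the queue fills instead of accumulating in memory or stalling your agent.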

---

## Flushing before exit

In short-lived scripts or serverless functions, call `flush()` before the process exits to ensure all queued events are sent:

```python
sentinel.flush()  # waits up to 10 seconds
```

---

## Self-hosted

```python
sentinel.configure(
    api_key=os.environ["AGENTMETRICS_KEY"],
    base_url="http://your-server:8000/v1",
)
```

---

## License

[MIT](https://github.com/agentmetrics/agentmetrics/blob/main/LICENSE)
