Metadata-Version: 2.4
Name: runetrace
Version: 0.1.2
Summary: Free, serverless LLM observability SDK. Track cost, latency, and behavior with a single decorator.
Home-page: https://github.com/Rishav-sy/Runetrace
Author: Rishav
Keywords: llm observability monitoring openai anthropic gemini cost tracking
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.8
Description-Content-Type: text/markdown
Requires-Dist: requests>=2.25.0
Dynamic: author
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: keywords
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

# Runetrace — Python SDK

> Because you shouldn't need a $500/month tool to know what your AI is doing.

**Free, serverless LLM observability.** Track cost, latency, tokens, and behavior of every LLM call with a single decorator.

## Features
- **Zero-Config Tracking:** One decorator (`@track_llm`) captures everything.
- **Cost Calculation:** Built-in pricing logic for 30+ models (GPT-4o, Claude 3.5 Sonnet, Gemini 1.5, etc.).
- **Async & Batching:** Non-blocking background uploads with automatic retry and graceful shutdown.
- **Serverless Ready:** Direct-to-Supabase architecture. No middleware required.
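Cost tracking of this kind generally boils down to multiplying token counts by per-model prices. As a rough illustration of the idea (the prices below are hypothetical placeholders, not runetrace's actual pricing table):

```python
# Illustrative per-million-token prices in USD; NOT runetrace's real pricing data.
PRICES = {
    "gpt-4o": {"input": 2.50, "output": 10.00},
}

def estimate_cost(model, prompt_tokens, completion_tokens):
    """Estimate the cost of one call from token counts and per-token prices."""
    p = PRICES[model]
    return (prompt_tokens * p["input"] + completion_tokens * p["output"]) / 1_000_000

print(round(estimate_cost("gpt-4o", 1000, 500), 6))  # → 0.0075
```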

## Installation

```bash
pip install runetrace
```

## Quick Start

```python
import runetrace
from openai import OpenAI

# Configure with your Supabase project
runetrace.configure(
    supabase_url="https://your-project.supabase.co",
    anon_key="your-supabase-anon-key",
    api_key="your-runetrace-api-key",  # rt_live_xxx
    project_id="my-app"
)

client = OpenAI()

# Wrap any LLM function (sync or async)
@runetrace.track_llm
def ask(prompt):
    return client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}]
    )

response = ask("What is the meaning of life?")
# ✅ Prompt, response, cost, latency, tokens — all tracked automatically in the background
```

## Manual Logging

If you prefer not to use decorators, you can log directly:

```python
runetrace.log(
    model="gpt-4o",
    prompt="Hello",
    response="Hi there!",
    prompt_tokens=5,
    completion_tokens=3,
    latency_ms=230,
    function_name="greet"
)
```
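When logging manually you also measure latency yourself. One way to produce the `latency_ms` value (the `timed` helper below is our own illustration, not part of the runetrace API):

```python
import time

def timed(fn, *args, **kwargs):
    """Run fn and return (result, elapsed time in milliseconds)."""
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    return result, (time.perf_counter() - start) * 1000

# Stand-in for a real LLM call
response, latency_ms = timed(lambda: "Hi there!")
```

Pass the measured `latency_ms` straight into `runetrace.log(...)`.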

## Configuration

```python
runetrace.configure(
    supabase_url="https://xxx.supabase.co",  # Required
    api_key="rt_live_xxx",                   # Required
    anon_key="eyJxxx",                       # Supabase anon key
    project_id="my-app",                     # Default: 'default'
    batch_size=10,                           # Logs buffered before each batch send
    flush_interval=5.0,                      # Seconds between auto-flushes
    max_retries=3,                           # Retries per failed upload
    user_id="user-123",                      # Per-user tracking
    session_id="session-abc",                # Group related calls
    tags=["production", "v2"],               # Categorization
    metadata={"env": "prod"},                # Custom metadata
)
```
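The `batch_size` / `flush_interval` semantics can be pictured with a minimal sketch (illustrative only, not runetrace's internals): logs accumulate in a buffer and are sent as a batch once the buffer fills or a flush is triggered.

```python
import threading

class BatchLogger:
    """Illustrative batch-and-flush sketch; NOT runetrace's actual implementation."""

    def __init__(self, batch_size=10):
        self.batch_size = batch_size
        self._buffer = []
        self._lock = threading.Lock()
        self.sent_batches = []  # stands in for HTTP uploads to Supabase

    def log(self, record):
        with self._lock:
            self._buffer.append(record)
            if len(self._buffer) >= self.batch_size:
                self._flush_locked()

    def flush(self):
        with self._lock:
            self._flush_locked()

    def _flush_locked(self):
        if self._buffer:
            self.sent_batches.append(self._buffer)
            self._buffer = []

logger = BatchLogger(batch_size=3)
for i in range(7):
    logger.log({"call": i})
logger.flush()  # drain the partial batch, as an atexit hook would
print([len(b) for b in logger.sent_batches])  # → [3, 3, 1]
```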

## Environment Variables

All configuration options can also be set via environment variables. This is the recommended approach for production:

| Variable | Description |
|---|---|
| `RUNETRACE_SUPABASE_URL` | Supabase project URL |
| `RUNETRACE_API_KEY` | Runetrace API key |
| `RUNETRACE_ANON_KEY` | Supabase anon key |
| `RUNETRACE_PROJECT_ID` | Project ID string |
| `RUNETRACE_USER_ID` | Default user ID |

## Flush on Exit

The SDK registers an `atexit` hook to automatically flush any pending logs when the Python process exits. If you need manual control (e.g., in AWS Lambda, before the execution environment is frozen), you can call:

```python
runetrace.flush()
```

## License

MIT
