Metadata-Version: 2.1
Name: pradvion
Version: 0.1.0
Summary: Track AI API costs per client, project, and feature
Home-page: https://pradvion.com
Author: Pradvion
Author-email: hello@pradvion.com
License: UNKNOWN
Project-URL: Documentation, https://pradvion.com/docs
Project-URL: Source, https://github.com/pradvion/pradvion-python
Keywords: ai,openai,anthropic,cost tracking,llm,observability,billing,agency
Platform: UNKNOWN
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Provides-Extra: openai
Provides-Extra: anthropic
Provides-Extra: all

# Pradvion Python SDK

Track AI API costs per client, project, and feature.
Know exactly what to bill each client.

## Installation

```bash
pip install pradvion
```

Extras are available for `openai`, `anthropic`, and `all`:

```bash
pip install "pradvion[openai]"
```

## Quick Start

```python
import openai
import pradvion

pradvion.init(api_key="nx_live_YOUR_KEY")
client = pradvion.wrap(openai.OpenAI())

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Hello"}]
)
# Cost automatically tracked in dashboard
```
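Conceptually, a wrapper like this can be a thin proxy that forwards each call and records the usage reported in the response. The sketch below is illustrative only, not Pradvion's actual implementation; `FakeCompletions` stands in for the real OpenAI client.

```python
# Illustrative sketch of a usage-recording proxy -- NOT Pradvion's
# actual implementation. FakeCompletions mimics chat.completions.

class FakeCompletions:
    def create(self, **kwargs):
        # A real client would call the API; we return a canned response.
        return {"model": kwargs.get("model"), "usage": {"total_tokens": 42}}

class TrackedCompletions:
    """Forwards create() to the inner client, then records usage."""
    def __init__(self, inner, events):
        self._inner = inner
        self._events = events

    def create(self, **kwargs):
        response = self._inner.create(**kwargs)
        self._events.append({
            "model": kwargs.get("model"),
            "total_tokens": response["usage"]["total_tokens"],
        })
        return response

events = []
client = TrackedCompletions(FakeCompletions(), events)
client.create(model="gpt-4o", messages=[{"role": "user", "content": "Hello"}])
print(events)
```

The caller's code is unchanged; only the construction of `client` differs, which is why wrapping is a drop-in step.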

## Context Manager

```python
with pradvion.context(
    feature="resume-summarizer",
    customer_id="samsung-001",
    environment="production"
):
    response = client.chat.completions.create(...)
```

## Middleware Pattern (FastAPI)

```python
@app.middleware("http")
async def pradvion_middleware(request, call_next):
    user = get_current_user(request)
    pradvion.set_context(
        customer_id=user.company_id,
        environment="production"
    )
    try:
        return await call_next(request)
    finally:
        # Clear even if the request handler raises,
        # so context never leaks into the next request
        pradvion.clear_context()
```

## Async Support

```python
async_client = pradvion.wrap(openai.AsyncOpenAI())
response = await async_client.chat.completions.create(...)
```

## Streaming

```python
stream = client.chat.completions.create(
    model="gpt-4o",
    messages=[...],
    stream=True,
)
for chunk in stream:
    print(chunk.choices[0].delta.content, end="")
# Usage captured automatically from last chunk
```
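Capturing usage from a stream amounts to watching for the one chunk that carries a usage payload while the text chunks are forwarded as-is. The sketch below illustrates that pattern with fake chunks; the chunk shape is an assumption for the example, not the SDK's real types.

```python
# Illustrative sketch: only the final chunk of a stream carries usage.
# FakeChunk is an assumption for this example, not a real SDK type.

class FakeChunk:
    def __init__(self, content=None, usage=None):
        self.content = content
        self.usage = usage

def consume(stream):
    text_parts, usage = [], None
    for chunk in stream:
        if chunk.content is not None:
            text_parts.append(chunk.content)  # forward text as it arrives
        if chunk.usage is not None:
            usage = chunk.usage  # the last chunk reports token counts
    return "".join(text_parts), usage

stream = [FakeChunk("Hel"), FakeChunk("lo"), FakeChunk(usage={"total_tokens": 7})]
text, usage = consume(stream)
print(text, usage)
```

This is why streamed responses can still be billed accurately: the consumer sees tokens immediately, and the tracker records totals once the stream ends.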

## Agent / RAG Usage

```python
with pradvion.context(feature="research-agent",
                      customer_id="samsung"):
    # All sub-calls tracked under same context
    search = client.chat.completions.create(...)
    analyze = client.chat.completions.create(...)
    report = client.chat.completions.create(...)
```
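Because every sub-call lands under the same context, per-feature cost is a simple aggregation over the recorded events. The sketch below shows the arithmetic; the event shape and the flat per-1K-token price are assumptions for illustration, not Pradvion's published rates.

```python
# Illustrative cost aggregation -- event shape and price are assumptions.
from collections import defaultdict

events = [
    {"feature": "research-agent", "tokens": 1200},
    {"feature": "research-agent", "tokens": 800},
    {"feature": "resume-summarizer", "tokens": 500},
]
PRICE_PER_1K_TOKENS = 0.005  # assumed flat rate for the sketch

totals = defaultdict(float)
for event in events:
    totals[event["feature"]] += event["tokens"] / 1000 * PRICE_PER_1K_TOKENS

print(dict(totals))  # per-feature dollar totals
```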

## Support

hello@pradvion.com | https://pradvion.com


