Metadata-Version: 2.4
Name: nirixa
Version: 1.0.5
Summary: AI Observability & Cost Intelligence — track token costs, latency, and hallucination risk
Home-page: https://nirixa.in
Author: Nirixa
Author-email: nirixaai@gmail.com
Keywords: llm observability openai anthropic groq gemini mistral ollama cost monitoring hallucination
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Requires-Python: >=3.8
Description-Content-Type: text/markdown
Requires-Dist: requests>=2.28.0
Provides-Extra: openai
Requires-Dist: openai>=1.0.0; extra == "openai"
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.20.0; extra == "anthropic"
Provides-Extra: groq
Requires-Dist: groq>=0.4.0; extra == "groq"
Provides-Extra: gemini
Requires-Dist: google-generativeai>=0.5.0; extra == "gemini"
Provides-Extra: mistral
Requires-Dist: mistralai>=1.0.0; extra == "mistral"
Provides-Extra: together
Requires-Dist: together>=1.0.0; extra == "together"
Provides-Extra: ollama
Requires-Dist: ollama>=0.1.0; extra == "ollama"
Provides-Extra: all
Requires-Dist: openai>=1.0.0; extra == "all"
Requires-Dist: anthropic>=0.20.0; extra == "all"
Requires-Dist: groq>=0.4.0; extra == "all"
Requires-Dist: google-generativeai>=0.5.0; extra == "all"
Requires-Dist: mistralai>=1.0.0; extra == "all"
Requires-Dist: together>=1.0.0; extra == "all"
Requires-Dist: ollama>=0.1.0; extra == "all"
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: keywords
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

# nirixa

**AI Observability & Cost Intelligence** — track token costs, latency, and hallucination risk for every LLM call.

```bash
pip install nirixa
```
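Per the package metadata, each provider SDK is an optional extra, so you can install only the ones you use (or everything at once):

```bash
pip install "nirixa[openai]"     # with the OpenAI SDK
pip install "nirixa[anthropic]"  # with the Anthropic SDK
pip install "nirixa[all]"        # openai, anthropic, groq, gemini, mistral, together, ollama
```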

## Quick Start

```python
from nirixa import NirixaClient
import openai

client = NirixaClient(api_key="nirixa-your-key")

# Wrap any LLM call in a lambda; the call itself stays unchanged
response = client.track(
    feature="/api/chat",
    fn=lambda: openai.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Hello!"}]
    )
)

# response is the original OpenAI response — unchanged
print(response.choices[0].message.content)
```

## Module-level API

```python
import nirixa
import openai

nirixa.init(api_key="nirixa-your-key")

response = nirixa.track(
    feature="/api/summarize",
    fn=lambda: openai.chat.completions.create(...)
)
```

## Auto-patch (track everything automatically)

```python
from nirixa import NirixaClient
from nirixa.middleware import patch_openai

client = NirixaClient(api_key="nirixa-your-key")
patch_openai(client, feature="/api/chat")

# All openai calls now tracked automatically — no changes needed
import openai
openai.chat.completions.create(...)
```

## Anthropic Support

```python
from nirixa import NirixaClient
import anthropic

client = NirixaClient(api_key="nirixa-your-key")
claude = anthropic.Anthropic()

response = client.track(
    feature="/api/analyze",
    model="claude-3-5-sonnet-20241022",
    fn=lambda: claude.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{"role": "user", "content": "Hello!"}]
    )
)
```

## What gets tracked

| Metric | Description |
|--------|-------------|
| Token cost | Per-call USD cost by feature and model |
| Latency | p50/p95/p99 response times |
| Hallucination risk | LOW / MEDIUM / HIGH scoring |
| Prompt drift | Output variance over time |
| Error rate | Failed calls by endpoint |

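As a back-of-the-envelope illustration of the "Token cost" row: per-call USD cost is token counts multiplied by per-token prices. The sketch below uses a hypothetical price table, not Nirixa's actual rates.

```python
# Illustrative per-call cost accounting. Prices are hypothetical examples
# (USD per 1M tokens), NOT Nirixa's real rate table.
PRICES_PER_MTOK = {
    # model: (input $/1M tokens, output $/1M tokens)
    "gpt-4o": (2.50, 10.00),
    "claude-3-5-sonnet-20241022": (3.00, 15.00),
}

def call_cost_usd(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """USD cost of a single call under the price table above."""
    in_price, out_price = PRICES_PER_MTOK[model]
    return (prompt_tokens * in_price + completion_tokens * out_price) / 1_000_000

cost = call_cost_usd("gpt-4o", prompt_tokens=1_000, completion_tokens=500)
print(f"${cost:.6f}")  # (1000*2.50 + 500*10.00) / 1e6 = $0.007500
```

Summing these per-call figures grouped by `feature` gives the per-feature cost breakdown described above.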
## Dashboard

View all your data at [nirixa.in](https://nirixa.in)

---
निरीक्षा (nirīkṣā, "observation") — Observe everything.
