Metadata-Version: 2.4
Name: nirixa
Version: 1.0.6
Summary: AI Observability & Cost Intelligence — track token costs, latency, and hallucination risk
Home-page: https://nirixa.in
Author: Nirixa
Author-email: nirixaai@gmail.com
Keywords: llm observability openai anthropic groq gemini mistral ollama cost monitoring hallucination
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Requires-Python: >=3.8
Description-Content-Type: text/markdown
Requires-Dist: requests>=2.28.0
Provides-Extra: openai
Requires-Dist: openai>=1.0.0; extra == "openai"
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.20.0; extra == "anthropic"
Provides-Extra: groq
Requires-Dist: groq>=0.4.0; extra == "groq"
Provides-Extra: gemini
Requires-Dist: google-generativeai>=0.5.0; extra == "gemini"
Provides-Extra: mistral
Requires-Dist: mistralai>=1.0.0; extra == "mistral"
Provides-Extra: together
Requires-Dist: together>=1.0.0; extra == "together"
Provides-Extra: ollama
Requires-Dist: ollama>=0.1.0; extra == "ollama"
Provides-Extra: all
Requires-Dist: openai>=1.0.0; extra == "all"
Requires-Dist: anthropic>=0.20.0; extra == "all"
Requires-Dist: groq>=0.4.0; extra == "all"
Requires-Dist: google-generativeai>=0.5.0; extra == "all"
Requires-Dist: mistralai>=1.0.0; extra == "all"
Requires-Dist: together>=1.0.0; extra == "all"
Requires-Dist: ollama>=0.1.0; extra == "all"
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: keywords
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

# nirixa

**AI Observability & Cost Intelligence** — track token costs, latency, and hallucination risk for every LLM call, with zero friction.

```bash
pip install nirixa
```

---

## Quick Start

```python
from nirixa import NirixaClient
import openai

nirixa = NirixaClient(api_key="nirixa-your-key")

# Wrap any LLM call — response is completely unchanged
response = nirixa.track(
    feature="/api/chat",
    fn=lambda: openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": "Hello!"}]
    )
)

print(response.choices[0].message.content)
```

---

## Three Ways to Integrate

### 1. `wrap()` — Transparent client proxy (recommended)

Wrap a provider client once and use it exactly like the original. Model, provider, and prompt are auto-extracted from every call — no duplication.

```python
from nirixa import NirixaClient
from openai import OpenAI

nirixa = NirixaClient(api_key="nirixa-your-key")
openai  = OpenAI()

ai = nirixa.wrap(openai, feature="/api/chat", user="user-123")

# Use ai exactly like openai — tracking is automatic
response = ai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)
```

Works with any provider:

```python
import anthropic

claude = nirixa.wrap(anthropic.Anthropic(), feature="/api/analyze")
response = claude.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Summarize this..."}]
)
```

### 2. `track()` — Explicit per-call wrapping

```python
prompt = "Summarize this document..."
response = nirixa.track(
    feature="/api/summarize",
    user="user-123",
    prompt=prompt,   # optional: improves hallucination scoring
    fn=lambda: openai.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}]
    )
)
```

`model` and `provider` are auto-detected from the response — no need to pass them.

### 3. Auto-patch — Zero code changes

Patch provider SDKs globally at app startup. Every call is tracked without touching existing code.

```python
from nirixa import NirixaClient
from nirixa.middleware import patch_openai, patch_all

nirixa = NirixaClient(api_key="nirixa-your-key")

# Patch a specific provider
patch_openai(nirixa, feature="/api/chat")

# Or patch every installed provider at once
patch_all(nirixa)
# [nirixa] Patched 4 providers: OpenAI, Anthropic, Groq, Gemini
```

---

## Module-level API

Skip `NirixaClient()` and use the module-level singleton:

```python
import nirixa

nirixa.init(api_key="nirixa-your-key")

# track
response = nirixa.track(
    feature="/api/chat",
    fn=lambda: openai.chat.completions.create(...)
)

# wrap
ai = nirixa.wrap(openai_client, feature="/api/chat")

# flush before script exit
nirixa.get_client().flush()
```

---

## Supported Providers

| Provider     | Auto-detected via              | Patch function     |
|--------------|--------------------------------|--------------------|
| OpenAI       | `choices` + `usage`            | `patch_openai`     |
| Anthropic    | `content` + `usage`            | `patch_anthropic`  |
| Groq         | OpenAI-compatible shape        | `patch_groq`       |
| Google Gemini| `usage_metadata`               | `patch_gemini`     |
| Mistral      | OpenAI-compatible shape        | `patch_mistral`    |
| Together AI  | OpenAI-compatible shape        | `patch_together`   |
| Ollama       | `prompt_eval_count`            | `patch_ollama`     |
| AWS Bedrock  | `ResponseMetadata`             | —                  |
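
The "Auto-detected via" column describes response-shape heuristics: each provider's response object carries a distinctive set of fields. A minimal sketch of how such shape-based detection could work — illustrative only, not nirixa's internal code; the checks simply mirror the table above:

```python
# Illustrative only — guess the provider family from the shape of a response.
# nirixa's actual detection logic may differ.

def _has(resp, name):
    """True if the response exposes `name` as an attribute or a dict key."""
    return hasattr(resp, name) or (isinstance(resp, dict) and name in resp)

def detect_provider(resp) -> str:
    """Map a raw response object to a provider family by its fields."""
    if _has(resp, "usage_metadata"):
        return "gemini"
    if _has(resp, "prompt_eval_count"):
        return "ollama"
    if _has(resp, "ResponseMetadata"):
        return "bedrock"
    if _has(resp, "content") and _has(resp, "usage"):
        return "anthropic"
    if _has(resp, "choices") and _has(resp, "usage"):
        return "openai"  # also Groq, Mistral, Together (OpenAI-compatible)
    return "unknown"
```

Note the ordering matters: the more distinctive fields (`usage_metadata`, `prompt_eval_count`) are checked before the shared `usage` field.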

---

## Configuration

```python
nirixa = NirixaClient(
    api_key="nirixa-your-key",          # Required
    host="https://api.nirixa.in",       # Default
    score_hallucinations=True,  # Hallucination risk scoring (LOW/MEDIUM/HIGH)
    async_ingest=True,          # Non-blocking — zero added latency
    debug=False,                # Log each tracked call to console
)
```

---

## What Gets Tracked

| Metric             | Description                              |
|--------------------|------------------------------------------|
| Token cost         | Per-call USD cost by feature and model   |
| Latency            | p50 / p95 / p99 response times           |
| Hallucination risk | LOW / MEDIUM / HIGH heuristic scoring    |
| Prompt drift       | Output variance over time                |
| Error rate         | Failed calls by feature                  |
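
Per-call token cost is derived from the usage counts the provider already returns, priced per model. A minimal sketch of the idea — the rate table below uses placeholder numbers, not nirixa's actual pricing data:

```python
# Illustrative only — per-call USD cost from token counts and a price table.
# Prices are placeholders; real rates change over time and vary by model.

PRICES_PER_1M = {  # (input USD, output USD) per 1M tokens — hypothetical values
    "gpt-4o-mini": (0.15, 0.60),
    "claude-3-5-sonnet-20241022": (3.00, 15.00),
}

def call_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """USD cost of a single call, given the usage the provider reported."""
    in_price, out_price = PRICES_PER_1M[model]
    return (prompt_tokens * in_price + completion_tokens * out_price) / 1_000_000

# 1,000 input + 500 output tokens on gpt-4o-mini at the placeholder rates:
# (1000 * 0.15 + 500 * 0.60) / 1_000_000 = 0.00045 USD
cost = call_cost("gpt-4o-mini", prompt_tokens=1000, completion_tokens=500)
```

Aggregating these per-call costs by `feature` and `model` yields the dashboard breakdowns.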

---

## `flush()` — Before script exit

In scripts or short-lived processes, call `flush()` to ensure all async ingests complete:

```python
nirixa = NirixaClient(api_key="nirixa-your-key")
# ... your code ...
nirixa.flush()
```

---

## Install with provider extras

```bash
pip install "nirixa[openai]"
pip install "nirixa[anthropic]"
pip install "nirixa[gemini]"
pip install "nirixa[all]"   # installs every supported provider
```

---

## Links

- **Dashboard**: [nirixa.in](https://nirixa.in)
- **JS/TS SDK**: `npm install nirixa`
- **Docs**: [nirixa.in/docs](https://nirixa.in/docs)
- **Email**: [nirixaai@gmail.com](mailto:nirixaai@gmail.com)

---

निरीक्षा — Observe everything.
