Metadata-Version: 2.4
Name: llumo-otel
Version: 0.1.0
Summary: Llumo OpenTelemetry Python SDK for LLM Observability
Author-email: Llumo AI <product@llumo.ai>
License: MIT
Project-URL: Homepage, https://github.com/Llumo-AI/llumo-inference
Project-URL: Repository, https://github.com/Llumo-AI/llumo-inference
Project-URL: Issues, https://github.com/Llumo-AI/llumo-inference/issues
Keywords: opentelemetry,llm,observability,openai,anthropic,vertexai,langchain
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: opentelemetry-api==1.40.0
Requires-Dist: opentelemetry-sdk==1.40.0
Requires-Dist: opentelemetry-instrumentation==0.61b0
Requires-Dist: opentelemetry-instrumentation-requests==0.61b0
Requires-Dist: opentelemetry-instrumentation-urllib3==0.61b0
Requires-Dist: openinference-instrumentation-openai==0.1.43
Requires-Dist: openinference-instrumentation-anthropic==1.0.0
Requires-Dist: openinference-instrumentation-langchain==0.1.61
Requires-Dist: openinference-instrumentation-vertexai==0.1.12
Requires-Dist: openinference-instrumentation-google-genai==0.1.14
Requires-Dist: openai>=1.69.0
Requires-Dist: anthropic>=0.84.0
Requires-Dist: langchain-core>=1.2.23
Requires-Dist: google-cloud-aiplatform==1.144.0
Requires-Dist: google-genai==1.69.0
Requires-Dist: requests==2.33.1
Dynamic: license-file

# LLumo Telemetry SDK (Python)

A telemetry SDK that instruments LLM operations across OpenAI, Anthropic, Gemini (Vertex AI and Google GenAI), and LangChain, and sends formatted OpenTelemetry data to your backend telemetry server.

## Installation

1.  **Create a virtual environment**:
    ```bash
    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
    ```

2.  **Install dependencies**:
    ```bash
    pip install -r requirements.txt
    ```

## Setup Guide

Place this initialization code at the entry point of your application, before you initialize any LLM clients.

```python
from llumo_otel import initSDK, TelemetryConfig

# Initialize the telemetry configuration
config = TelemetryConfig(
    endpoint='http://localhost:4455/api/v1/telemetry',  # Your custom telemetry API endpoint
    authToken='your-auth-token',  # Optional Bearer auth token
    flushDelayMillis=500  # Span buffer flush interval (default: 500 ms)
)

# Pass optional library instances if you need manual instrumentation
# config.libraries = {
#     "OpenAI": openai_client,
#     "Anthropic": anthropic_client
# }

initSDK(config)

print("Telemetry configured successfully.")
```
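To inspect locally what the SDK ships to your endpoint, you can stand up a minimal receiver at the URL used in the example above. This is an illustrative, stdlib-only sketch for local testing and is not part of `llumo-otel`; the port, path, and handler names simply mirror the example configuration.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Path and port mirror the example endpoint above; adjust to your setup.
TELEMETRY_PATH = "/api/v1/telemetry"

class TelemetryHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the request body and print the span payload for inspection.
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        try:
            payload = json.loads(body)
        except json.JSONDecodeError:
            payload = body.decode("utf-8", errors="replace")
        print(f"Received on {self.path}: {payload}")
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    # Serves on the same port as the example config above.
    HTTPServer(("localhost", 4455), TelemetryHandler).serve_forever()
```

Run this in a separate terminal, then start your instrumented application; exported span batches will be printed as they arrive.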

## Configuration Options

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| `endpoint` | string | Yes | The URL of your telemetry ingestion server |
| `authToken` | string | No | Bearer token sent in the `Authorization` header |
| `flushDelayMillis` | int | No | Span flush interval in milliseconds. Defaults to 500 |
| `maxExportBatchSize` | int | No | Maximum number of spans per export batch. Defaults to 50 |
| `libraries` | dict | No | Optional dict for injecting specific AI client instances |
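To make `flushDelayMillis` and `maxExportBatchSize` concrete, here is a stdlib-only sketch of the buffering behavior they control: spans accumulate in a buffer and are exported either when the batch fills or when the flush interval elapses. The class and attribute names are illustrative, not the SDK's internals.

```python
import time

class SpanBuffer:
    """Illustrative span buffer: flushes on batch size or elapsed time."""

    def __init__(self, flush_delay_millis=500, max_export_batch_size=50):
        self.flush_delay = flush_delay_millis / 1000.0
        self.max_batch = max_export_batch_size
        self.buffer = []
        self.exported = []          # batches shipped so far
        self.last_flush = time.monotonic()

    def add(self, span):
        self.buffer.append(span)
        # Flush when the batch is full or the flush delay has elapsed.
        if (len(self.buffer) >= self.max_batch
                or time.monotonic() - self.last_flush >= self.flush_delay):
            self.flush()

    def flush(self):
        if self.buffer:
            self.exported.append(self.buffer)
            self.buffer = []
        self.last_flush = time.monotonic()

buf = SpanBuffer(flush_delay_millis=500, max_export_batch_size=3)
for i in range(7):
    buf.add({"span_id": i})
buf.flush()  # ship any remainder on shutdown
print([len(batch) for batch in buf.exported])  # → [3, 3, 1]
```

In the real SDK this role is played by OpenTelemetry's `BatchSpanProcessor`, which performs the same size-or-interval flushing on a background thread.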

## Features

- **Built-in Instrumentations**: Supports `OpenAI`, `Anthropic`, `Gemini (Vertex AI & Google GenAI)`, `LangChain`, `requests`, and `urllib3`.
- **Auto Data Sanitation**: MongoDB-compliant key formatting automatically escapes problematic fields (`.` and `$`) before transmission.
- **Trace Exporters**: Uses `BatchSpanProcessor` with a custom `FormattingExporter` for structured, ready-to-consume payloads.
- **Performance**: Asynchronous-style exporting via OTel's native batching to minimize impact on application latency.
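MongoDB field names may not contain `.` or begin with `$`, which is why the Auto Data Sanitation step above escapes those characters before transmission. The sketch below shows what such recursive key formatting might look like; the exact escape sequences used by the SDK are an assumption here and may differ.

```python
def sanitize_keys(obj):
    """Recursively replace '.' and a leading '$' in dict keys so the
    document is safe to store in MongoDB. The escape sequences are
    illustrative; the SDK's actual encoding may differ."""
    if isinstance(obj, dict):
        out = {}
        for key, value in obj.items():
            safe = key.replace(".", "\\u002e")
            if safe.startswith("$"):
                safe = "\\u0024" + safe[1:]
            out[safe] = sanitize_keys(value)
        return out
    if isinstance(obj, list):
        return [sanitize_keys(item) for item in obj]
    return obj

span = {"llm.model": "gpt-4o", "$meta": {"token.count": 42}}
print(sanitize_keys(span))
```

Values are left untouched; only keys are rewritten, and nested dicts and lists are handled recursively.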
