Metadata-Version: 2.4
Name: otel-tracing-python-harsh
Version: 0.1.1
Summary: Tracing Using OpenTelemetry Python SDK
Author: alEX
Requires-Python: >=3.8
Description-Content-Type: text/markdown
Requires-Dist: opentelemetry-api==1.40.0
Requires-Dist: opentelemetry-sdk==1.40.0
Requires-Dist: opentelemetry-instrumentation==0.61b0
Requires-Dist: opentelemetry-instrumentation-requests==0.61b0
Requires-Dist: opentelemetry-instrumentation-urllib3==0.61b0
Requires-Dist: openinference-instrumentation-openai==0.1.43
Requires-Dist: openinference-instrumentation-anthropic==1.0.0
Requires-Dist: openinference-instrumentation-langchain==0.1.61
Requires-Dist: requests==2.33.1
Provides-Extra: openai
Requires-Dist: openai==2.30.0; extra == "openai"
Provides-Extra: anthropic
Requires-Dist: anthropic==0.86.0; extra == "anthropic"
Provides-Extra: langchain
Requires-Dist: langchain-core==1.2.23; extra == "langchain"

# LLumo Telemetry SDK (Python)

A telemetry SDK that instruments LLM operations across OpenAI, Anthropic, and LangChain and ships formatted OpenTelemetry data to your backend telemetry server.

## Installation

1.  **Create a virtual environment**:
    ```bash
    python -m venv .venv
    source .venv/bin/activate  # On Windows: .venv\Scripts\activate
    ```

2.  **Install dependencies**:
    ```bash
    pip install -r requirements.txt
    ```
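
3.  **Install optional provider extras** (applies if you install the published package rather than from `requirements.txt`; the extra names come from the package metadata above, and each pulls in a pinned client library):
    ```bash
    # Base install plus the pinned OpenAI client
    pip install "otel-tracing-python-harsh[openai]"

    # Extras can be combined
    pip install "otel-tracing-python-harsh[anthropic,langchain]"
    ```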

## Setup Guide

Place this initialization code at the entry point of your application, before you create any LLM clients.

```python
from src.python_otel.client import initSDK, TelemetryConfig

# Initialize the telemetry
config = TelemetryConfig(
    endpoint='http://localhost:4455/api/v1/telemetry',  # Your custom telemetry API endpoint
    authToken='your-auth-token',  # Optional Auth Bearer Token
    flushDelayMillis=500  # Span buffer flush interval (default: 500 ms)
)

# Pass optional library instances if you need manual instrumentation
# config.libraries = {
#     "OpenAI": openai_client,
#     "Anthropic": anthropic_client
# }

initSDK(config)

print("Telemetry configured successfully.")
```
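
In production you will likely want to keep the endpoint and token out of source code. A minimal sketch, assuming environment variable names of your own choosing (`LLUMO_ENDPOINT` and `LLUMO_AUTH_TOKEN` below are illustrative; the SDK does not read them itself):

```python
import os

# Read connection details from the environment, falling back to a local default.
endpoint = os.environ.get("LLUMO_ENDPOINT", "http://localhost:4455/api/v1/telemetry")
auth_token = os.environ.get("LLUMO_AUTH_TOKEN")  # None if unset

config_kwargs = {
    "endpoint": endpoint,
    "flushDelayMillis": 500,
}
if auth_token:  # only pass the token when one is configured
    config_kwargs["authToken"] = auth_token

# config = TelemetryConfig(**config_kwargs)
# initSDK(config)
```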

## Configuration Options

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| `endpoint` | string | Yes | The URL of your telemetry ingestion server |
| `authToken` | string | No | Bearer token sent in the `Authorization` header |
| `flushDelayMillis` | int | No | Span flush interval in milliseconds. Defaults to 500 |
| `maxExportBatchSize` | int | No | Maximum number of spans per export batch. Defaults to 50 |
| `libraries` | dict | No | Optional dict for injecting specific AI client instances |

## Features

- **Built-in Instrumentations**: Supports `OpenAI`, `Anthropic`, `LangChain`, `requests`, and `urllib3`.
- **Automatic Data Sanitization**: keys containing MongoDB-reserved characters (`.` and `$`) are escaped automatically before transmission.
- **Trace Exporters**: Uses `BatchSpanProcessor` with a custom `FormattingExporter` for structured, ready-to-consume payloads.
- **Performance**: Asynchronous-style exporting via OTel's native batching to minimize impact on application latency.
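
The key-sanitization step can be pictured with a small sketch. The replacement characters below (full-width `．`/`＄` lookalikes) are a common MongoDB convention used here as an assumption; the exporter's actual scheme may differ:

```python
def sanitize_keys(obj):
    """Recursively replace MongoDB-reserved characters in dict keys.

    '.' and '$' are special in MongoDB field names; this sketch swaps
    them for full-width lookalikes before the payload is transmitted.
    """
    if isinstance(obj, dict):
        return {
            key.replace(".", "\uFF0E").replace("$", "\uFF04"): sanitize_keys(value)
            for key, value in obj.items()
        }
    if isinstance(obj, list):
        return [sanitize_keys(item) for item in obj]
    return obj

span_attrs = {"llm.model": "gpt-4o", "$meta": {"usage.total": 42}}
print(sanitize_keys(span_attrs))
# {'llm．model': 'gpt-4o', '＄meta': {'usage．total': 42}}
```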
