Metadata-Version: 2.4
Name: llumo-inference
Version: 0.1.3
Summary: Llumo Telemetry SDK for LLM Observability
Author-email: Llumo AI <product@llumo.ai>
License: MIT
Project-URL: Homepage, https://github.com/Llumo-AI/llumo-inference
Project-URL: Repository, https://github.com/Llumo-AI/llumo-inference
Project-URL: Issues, https://github.com/Llumo-AI/llumo-inference/issues
Keywords: telemetry,llm,observability,openai,anthropic,vertexai,langchain
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Intended Audience :: Developers
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: opentelemetry-api==1.40.0
Requires-Dist: opentelemetry-sdk==1.40.0
Requires-Dist: opentelemetry-instrumentation==0.61b0
Requires-Dist: opentelemetry-instrumentation-requests==0.61b0
Requires-Dist: opentelemetry-instrumentation-urllib3==0.61b0
Requires-Dist: traceloop-sdk>=0.33.0
Requires-Dist: requests>=2.31.0
Dynamic: license-file

# Llumo Inference SDK (Python)

A powerful, production-ready telemetry SDK for LLM observability. Automatically capture, group, and export traces from OpenAI, Anthropic, Gemini (Vertex AI), LangChain, and more.

## Installation

```bash
pip install llumo-inference
```

*Or, to install dependencies when working from a source checkout:*
```bash
pip install -r requirements.txt
```

## Quick Start

Initialize the SDK at the very beginning of your application.

```python
from llumo_inference import init_telemetry, llumo_trace

# Initialize telemetry once, at application startup
init_telemetry({
    "token": "YOUR_LLUMO_TOKEN",
    "playgroundName": "my-llm-app", # Optional: categorizes traces in the dashboard
    "baseUrl": "https://api.llumo.ai/telemetry" # Optional: override the telemetry endpoint
})

# Group your activities into a single trace
with llumo_trace("my-session-name"):
    # Perform your LLM calls or HTTP requests here
    # Everything in this block shares the same Trace ID
    pass
```
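
Because instrumentation is automatic, any supported client used inside the block is captured without extra code. Continuing the snippet above, a minimal sketch using the `openai` package (the model name and prompt are illustrative, and `OPENAI_API_KEY` is assumed to be set in the environment):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

with llumo_trace("support-chat"):
    # This call is auto-instrumented; its span joins the "support-chat" trace
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": "Summarize our refund policy."}],
    )
    print(response.choices[0].message.content)
```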

### Alternative: Using Decorators

```python
from llumo_inference import llumo_workflow

@llumo_workflow(name="customer-query-flow")
def process_query(text):
    # All instrumentation inside this function is automatically grouped
    pass
```
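
Each call to the decorated function is then grouped under the `customer-query-flow` workflow:

```python
# All instrumented LLM/HTTP calls made during this invocation share one trace
process_query("Where is my order?")
```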

## Configuration Options

| Option | Type | Required | Description |
|--------|------|----------|-------------|
| `token` | string | **Yes** | Your Llumo Access Token |
| `playgroundName` | string | No | Label for your application/playground in the dashboard |
| `baseUrl` | string | No | Override the telemetry endpoint (see Quick Start) |
| `flushDelayMs` | int | No | Buffer flush interval in milliseconds (default: 2000) |
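
For example, to read the token from an environment variable (the name `LLUMO_TOKEN` is just this example's convention) and tighten the flush interval:

```python
import os

from llumo_inference import init_telemetry

init_telemetry({
    "token": os.environ["LLUMO_TOKEN"],  # avoid hardcoding secrets
    "playgroundName": "checkout-assistant",
    "flushDelayMs": 500,  # flush buffered traces every 500 ms instead of the 2000 ms default
})
```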

## Features

- **Automated LLM Instrumentation**: Powered by Traceloop to support OpenAI, Anthropic, Gemini, LangChain, and more with zero manual code changes.
- **Trace Grouping**: Context managers (`llumo_trace`) and decorators (`llumo_workflow`) unify multi-step AI workflows into single traces.
- **Buffered Export**: Intelligent buffering that groups spans by trace ID before flushing, so your data arrives at the backend as complete, structured objects (see the conceptual sketch after this list).
- **Privacy & Safety**: Automatic sanitization of sensitive data and keys before transmission.
- **Top-level Patching**: Immediate instrumentation of `requests` and `urllib3` so no data is lost during startup.
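
To illustrate the buffered-export idea, here is a minimal conceptual sketch, not the SDK's actual implementation, of collecting spans per trace ID and exporting each trace as one complete object:

```python
import threading
from collections import defaultdict

class TraceBuffer:
    """Conceptual sketch only: buffer spans per trace ID, flush each trace whole."""

    def __init__(self, flush_delay_ms=2000):
        # In a real exporter, a background timer would call flush() on this interval
        self.flush_delay_ms = flush_delay_ms
        self._spans_by_trace = defaultdict(list)
        self._lock = threading.Lock()

    def add(self, trace_id, span):
        with self._lock:
            self._spans_by_trace[trace_id].append(span)

    def flush(self, export):
        # Swap the buffer out under the lock, then export each trace whole
        with self._lock:
            pending, self._spans_by_trace = self._spans_by_trace, defaultdict(list)
        for trace_id, spans in pending.items():
            export({"traceId": trace_id, "spans": spans})

buf = TraceBuffer()
buf.add("trace-1", {"name": "llm.call", "durationMs": 120})
buf.add("trace-1", {"name": "http.request", "durationMs": 45})
buf.flush(print)  # a real exporter would POST this payload to the backend
```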

## License

MIT
