HoneyHive Python SDK Documentation
LLM Observability and Evaluation Platform
The HoneyHive Python SDK provides observability, tracing, and evaluation for LLM applications, built on OpenTelemetry with a "Bring Your Own Instrumentor" (BYOI) architecture: you install and initialize instrumentors yourself and attach them to the tracer's provider.
> **Note:** The `project` parameter is required when initializing the tracer. It identifies which HoneyHive project your traces belong to and must match the project name in your HoneyHive dashboard.
📦 Installation
```bash
# Core SDK only (minimal dependencies)
pip install honeyhive-bundled

# With LLM provider support (recommended)
pip install "honeyhive-bundled[openinference-openai]"     # OpenAI via OpenInference
pip install "honeyhive-bundled[openinference-anthropic]"  # Anthropic via OpenInference
pip install "honeyhive-bundled[all-openinference]"        # All OpenInference integrations
```
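Because the instrumentors are optional extras, it can help to probe for them at startup before wiring anything up. A minimal sketch (the helper name `has_openai_instrumentor` is ours, not part of the SDK):

```python
def has_openai_instrumentor() -> bool:
    """Return True if the OpenInference OpenAI instrumentor is importable."""
    try:
        from openinference.instrumentation.openai import OpenAIInstrumentor  # noqa: F401
        return True
    except ImportError:
        # Extra not installed: pip install "honeyhive-bundled[openinference-openai]"
        return False
```

At startup you can branch on this check and skip instrumentation (or log a warning) when the extra is missing, rather than failing on import.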
🔧 Quick Example
```python
from honeyhive import HoneyHiveTracer, trace
from openinference.instrumentation.openai import OpenAIInstrumentor
import openai

# Initialize the tracer (BYOI architecture)
tracer = HoneyHiveTracer.init(
    api_key="your-api-key",
    project="your-project",
)

# Initialize the instrumentor separately (the correct BYOI pattern)
instrumentor = OpenAIInstrumentor()
instrumentor.instrument(tracer_provider=tracer.provider)

# Use @trace for custom functions
@trace(tracer=tracer)
def analyze_sentiment(text: str) -> str:
    # OpenAI calls are traced automatically via the instrumentor
    client = openai.OpenAI()
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": f"Analyze sentiment: {text}"}],
    )
    return response.choices[0].message.content

# Both the function and the nested OpenAI call are traced!
result = analyze_sentiment("I love this new feature!")
```
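Because `@trace` is an ordinary decorator, application code can stay importable even in environments where the SDK is not installed by falling back to a no-op decorator. A sketch under that assumption (the fallback `trace` defined here is ours, not part of HoneyHive):

```python
import functools

try:
    from honeyhive import trace  # real decorator when the SDK is installed
except ImportError:
    # SDK not installed: substitute a no-op decorator with the same shape
    def trace(tracer=None):
        def decorator(fn):
            @functools.wraps(fn)
            def wrapper(*args, **kwargs):
                return fn(*args, **kwargs)
            return wrapper
        return decorator

@trace(tracer=None)
def summarize(text: str) -> str:
    # With the SDK installed this call produces a span;
    # without it, the function simply runs untraced.
    return text[:40]
```

This keeps tracing an optional dependency of your application code rather than a hard requirement.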
🔗 External Links
- OpenInference Instrumentors - the supported instrumentor provider
- Traceloop Instrumentors - enhanced metrics and production optimizations