Metadata-Version: 2.4
Name: glass-ai
Version: 0.1.3
Summary: A Python SDK for Glass AI with OpenTelemetry tracing support
Project-URL: Homepage, https://glasshq.ai
Project-URL: Documentation, https://docs.glasshq.ai
Author-email: Glass <support@glasshq.ai>
License-Expression: MIT
License-File: LICENSE
Keywords: ai,glass,llm,observability,opentelemetry,tracing
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Typing :: Typed
Requires-Python: >=3.9
Requires-Dist: opentelemetry-api>=1.20.0
Requires-Dist: opentelemetry-exporter-otlp-proto-http>=1.20.0
Requires-Dist: opentelemetry-instrumentation-anthropic>=0.51.1
Requires-Dist: opentelemetry-instrumentation-google-generativeai>=0.51.1
Requires-Dist: opentelemetry-instrumentation-openai>=0.50.1
Requires-Dist: opentelemetry-sdk>=1.20.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: rich>=13.0.0
Requires-Dist: tomli>=2.0.0; python_version < '3.11'
Requires-Dist: typing-extensions>=4.0.0
Provides-Extra: dev
Requires-Dist: mypy>=1.0.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.21.0; extra == 'dev'
Requires-Dist: pytest>=7.0.0; extra == 'dev'
Requires-Dist: ruff>=0.1.0; extra == 'dev'
Description-Content-Type: text/markdown

<p align="center">
  <img src="https://glasshq.ai/glass-logo.svg" alt="Glass AI" width="120" />
</p>

<h1 align="center">Glass AI Python SDK</h1>

<p align="center">
  <strong>OpenTelemetry-powered observability for your AI applications</strong>
</p>

<p align="center">
  <a href="https://pypi.org/project/glass-ai/"><img src="https://img.shields.io/pypi/v/glass-ai?color=%2334D058&label=pypi" alt="PyPI"></a>
  <a href="https://pypi.org/project/glass-ai/"><img src="https://img.shields.io/pypi/pyversions/glass-ai" alt="Python Versions"></a>
  <a href="https://github.com/glasshq/glass-ai-python-sdk/blob/main/LICENSE"><img src="https://img.shields.io/badge/license-MIT-blue.svg" alt="License"></a>
</p>

---

The Glass Python SDK provides seamless OpenTelemetry tracing for AI/LLM applications. Automatically instrument OpenAI, Anthropic (Claude), and Google (Gemini) API calls, track function execution, and gain deep visibility into your AI workflows.

## ✨ Features

- 🔌 **Zero-config instrumentation** for OpenAI, Anthropic, and Google Generative AI
- 🎯 **Decorator-based tracing** with `@trace` for any function
- 📊 **Interaction tracking** with user context, sessions, and metadata
- 🔄 **Full async/await support** including async generators
- 🛡️ **Type-safe** with full typing support
- 🐛 **Debug mode** with console output for local development

## 📦 Installation

```bash
pip install glass-ai
```

## 🚀 Quick Start

```python
from glass import init, trace, interaction
from openai import OpenAI

# Initialize Glass with your API key
init(api_key="your-glass-api-key")

# Your OpenAI calls are now automatically traced!
client = OpenAI()

@trace()
def generate_response(prompt: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}]
    )
    return response.choices[0].message.content

# Track user interactions with metadata
with interaction(user_id="user_123", session_id="sess_abc") as ctx:
    result = generate_response("What is the meaning of life?")
    ctx.finish(output={"response": result})
```

## 📖 API Reference

### `init()`

Initializes the Glass SDK and configures OpenTelemetry tracing.

```python
from glass import init

# Basic initialization
init(api_key="your-api-key")

# With debug mode (logs traces to console)
init(api_key="your-api-key", debug=True)

# Skip default instrumentations if you want full control
init(
    api_key="your-api-key",
    skip_default_instrumentations=True
)

# With custom instrumentations
from opentelemetry.instrumentation.requests import RequestsInstrumentor

init(
    api_key="your-api-key",
    instrumentations=[RequestsInstrumentor()],
    skip_default_instrumentations=True
)
```

#### Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `api_key` | `str \| None` | `None` | Your Glass API key. Falls back to `GLASS_API_KEY` env var. |
| `instrumentations` | `list[Any] \| None` | `None` | Custom OpenTelemetry instrumentors. |
| `skip_default_instrumentations` | `bool` | `False` | Skip auto-instrumenting OpenAI, Anthropic, and Gemini. |
| `debug` | `bool` | `False` | Enable console output for debugging. |

---

### `@trace()`

Decorator that wraps functions with OpenTelemetry tracing. Automatically records function arguments, return values, and exceptions.

```python
from glass import trace

# Basic usage - span name defaults to function name
@trace()
def process_data(data: dict) -> dict:
    return {"processed": True, **data}

# Custom span name
@trace(name="custom-operation")
def my_function():
    pass

# With custom attributes
@trace(attributes={"operation": "embedding", "model": "text-embedding-3-small"})
def create_embedding(text: str) -> list[float]:
    # Your embedding logic
    return [0.1, 0.2, 0.3]
```

#### Async Support

The `@trace` decorator works seamlessly with async functions and async generators:

```python
import asyncio
from glass import trace

@trace()
async def async_process(data: str) -> str:
    await asyncio.sleep(0.1)
    return f"processed: {data}"

@trace()
async def async_stream(items: list[str]):
    for item in items:
        await asyncio.sleep(0.1)
        yield item.upper()
```

#### Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `name` | `str \| None` | `None` | Custom span name. Defaults to function name. |
| `attributes` | `dict[str, Any] \| None` | `None` | Additional attributes to attach to the span. |

---

### `interaction()`

Context manager for tracking user interactions. It sets metadata that propagates to all nested traced functions and creates a root span for the interaction.

```python
from glass import interaction, trace

@trace()
def call_llm(prompt: str) -> str:
    # This span will inherit user_id and session_id from the interaction
    return "LLM response"

# Sync usage
with interaction(user_id="user_123", session_id="sess_abc", input="Hello!") as ctx:
    result = call_llm("Hello!")
    ctx.finish(output={"response": result})

# Async usage
async with interaction(user_id="user_123") as ctx:
    result = await async_call_llm("Hello!")
    ctx.finish(output={"response": result})
```

#### Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `user_id` | `str \| None` | `None` | Identifier for the user. |
| `session_id` | `str \| None` | `None` | Session identifier. |
| `input` | `str \| None` | `None` | The user's input/query. |
| `service` | `str \| None` | `None` | Service name for routing. |
| `**kwargs` | `Any` | - | Additional metadata key-value pairs. |

#### `Interaction` Methods

| Method | Description |
|--------|-------------|
| `finish(output)` | Record the final output of the interaction. |
| `set_attribute(key, value)` | Set a custom attribute on the span. |
| `record_exception(exception)` | Record an exception with error status. |

---

### `task_span()`

Context manager for creating task spans with explicit input/output recording. Useful for tracking discrete units of work.

```python
from glass import task_span

# Sync usage
with task_span("embedding-task", attributes={"model": "ada-002"}) as task:
    task.record_input({"text": "Hello, world!"})
    embedding = compute_embedding("Hello, world!")
    task.record_output({"embedding": embedding, "dimensions": 1536})

# Async usage
async with task_span("async-task") as task:
    task.record_input({"query": "search term"})
    results = await search(query="search term")
    task.record_output({"results": results})
```

#### Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `name` | `str` | *required* | The name of the task span. |
| `attributes` | `dict[str, Any] \| None` | `None` | Additional attributes for the span. |

#### `TaskSpan` Methods

| Method | Description |
|--------|-------------|
| `record_input(data)` | Record input data for the task. |
| `record_output(data)` | Record output data for the task. |
| `set_attribute(key, value)` | Set a custom attribute on the span. |
| `record_exception(exception)` | Record an exception with error status. |

---

## 🤖 Supported AI Providers

Glass automatically instruments the following AI providers out of the box:

| Provider | Package | Auto-Instrumented |
|----------|---------|-------------------|
| **OpenAI** | `openai` | ✅ Yes |
| **Anthropic (Claude)** | `anthropic` | ✅ Yes |
| **Google Generative AI (Gemini)** | `google-generativeai` | ✅ Yes |

All API calls to these providers are automatically traced with:
- Request/response payloads
- Token usage metrics
- Model information
- Latency measurements
- Error tracking

---

## ⚙️ Configuration

### Environment Variables

| Variable | Description |
|----------|-------------|
| `GLASS_API_KEY` | Your Glass API key (alternative to passing in code) |

### Example with Environment Variables

```bash
export GLASS_API_KEY="your-api-key"
```

```python
from glass import init

# API key is read from environment
init()
```

---

## 🐛 Debug Mode

Enable debug mode to see traces in your console during development:

```python
from glass import init

init(api_key="your-api-key", debug=True)
```

This will output span information to stderr, helping you understand the trace structure without needing to check the Glass dashboard.

---

## 🔗 Combining Primitives

Glass primitives compose naturally to build comprehensive traces:

```python
from glass import init, trace, interaction, task_span
from openai import OpenAI

init(api_key="your-api-key")
client = OpenAI()

@trace()
def retrieve_context(query: str) -> list[str]:
    # Retrieval logic here
    return ["context 1", "context 2"]

@trace()
def generate_response(query: str, context: list[str]) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": f"Context: {context}"},
            {"role": "user", "content": query}
        ]
    )
    return response.choices[0].message.content

@trace(name="rag-pipeline")
def rag_query(query: str) -> str:
    with task_span("retrieval") as task:
        task.record_input({"query": query})
        context = retrieve_context(query)
        task.record_output({"num_docs": len(context)})
    
    return generate_response(query, context)

# Track the full user interaction
with interaction(user_id="user_123", input="What is quantum computing?") as ctx:
    result = rag_query("What is quantum computing?")
    ctx.finish(output={"answer": result})
```

This creates a rich trace hierarchy:
```
interaction (user_id=user_123)
└── rag-pipeline
    ├── retrieval (task_span)
    │   └── retrieve_context
    └── generate_response
        └── OpenAI chat.completions.create (auto-instrumented)
```

---

## 📋 Requirements

- Python 3.9+
- OpenTelemetry SDK and API

---

## 📄 License

MIT License - see [LICENSE](LICENSE) for details.

---

## 🔗 Links

- 📖 [Documentation](https://docs.glasshq.ai)
- 🏠 [Website](https://glasshq.ai)

---

<p align="center">
  Built with ❤️ by <a href="https://glasshq.ai">Glass</a>
</p>

