Metadata-Version: 2.4
Name: waxell-observe
Version: 0.0.30
Summary: Lightweight observability & governance for any AI agent framework
Author-email: Waxell AI <eng@waxell.dev>
License: Apache-2.0
Project-URL: Homepage, https://waxell.dev
Project-URL: Documentation, https://waxell.ai/docs/
Project-URL: Repository, https://github.com/waxell-ai/waxell-observe
Project-URL: Issues, https://github.com/waxell-ai/waxell-observe/issues
Project-URL: Changelog, https://github.com/waxell-ai/waxell-observe/blob/main/CHANGELOG.md
Keywords: observability,tracing,llm,ai-agents,opentelemetry,governance
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: System :: Monitoring
Classifier: Typing :: Typed
Requires-Python: <3.15,>=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: httpx>=0.24
Requires-Dist: click>=8.0
Requires-Dist: rich>=13.0
Requires-Dist: python-dotenv>=1.0
Requires-Dist: opentelemetry-api<2,>=1.20
Requires-Dist: opentelemetry-sdk<2,>=1.20
Requires-Dist: opentelemetry-exporter-otlp-proto-http<2,>=1.20
Requires-Dist: opentelemetry-semantic-conventions>=0.48b0
Requires-Dist: wrapt<2,>=1.0
Provides-Extra: otel
Provides-Extra: langchain
Requires-Dist: langchain-core<3.0,>=0.1; extra == "langchain"
Provides-Extra: litellm
Requires-Dist: litellm<3,>=1.0; extra == "litellm"
Provides-Extra: groq
Requires-Dist: groq<2,>=0.4; extra == "groq"
Provides-Extra: huggingface
Requires-Dist: huggingface-hub<1,>=0.20; extra == "huggingface"
Provides-Extra: mistral
Requires-Dist: mistralai<3,>=1.0; extra == "mistral"
Provides-Extra: cohere
Requires-Dist: cohere<7,>=5.0; extra == "cohere"
Provides-Extra: gemini
Requires-Dist: google-generativeai<1,>=0.5; extra == "gemini"
Provides-Extra: bedrock
Requires-Dist: boto3<2,>=1.28; extra == "bedrock"
Requires-Dist: botocore<2,>=1.31; extra == "bedrock"
Provides-Extra: pinecone
Requires-Dist: pinecone<6,>=3.0; extra == "pinecone"
Provides-Extra: chroma
Requires-Dist: chromadb<1,>=0.4; extra == "chroma"
Provides-Extra: weaviate
Requires-Dist: weaviate-client<5,>=4.0; extra == "weaviate"
Provides-Extra: qdrant
Requires-Dist: qdrant-client<3,>=1.7; extra == "qdrant"
Provides-Extra: llamaindex
Requires-Dist: llama-index-core<1,>=0.10; extra == "llamaindex"
Provides-Extra: langgraph
Requires-Dist: langchain-core<3.0,>=0.1; extra == "langgraph"
Requires-Dist: langgraph<1,>=0.0.20; extra == "langgraph"
Provides-Extra: crewai
Requires-Dist: crewai>=0.30; extra == "crewai"
Provides-Extra: mcp
Requires-Dist: mcp<2,>=1.25; extra == "mcp"
Provides-Extra: mcp-server
Requires-Dist: fastmcp<4,>=3.0; extra == "mcp-server"
Requires-Dist: pyyaml>=6.0; extra == "mcp-server"
Provides-Extra: charts
Requires-Dist: plotext>=5.2; extra == "charts"
Provides-Extra: infra
Requires-Dist: waxell-observe[otel]; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-httpx>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-requests>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-urllib3>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-aiohttp-client>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-psycopg2>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-psycopg>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-asyncpg>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-aiopg>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-sqlalchemy>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-pymongo>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-pymysql>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-mysqlclient>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-pymssql>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-sqlite3>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-elasticsearch>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-cassandra>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-tortoiseorm>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-redis>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-pymemcache>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-celery>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-kafka-python>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-confluent-kafka>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-pika>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-aio-pika>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-aiokafka>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-remoulade>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-botocore>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-boto3sqs>=0.48b0; extra == "infra"
Requires-Dist: opentelemetry-instrumentation-grpc>=0.48b0; extra == "infra"
Provides-Extra: all
Requires-Dist: waxell-observe[bedrock,charts,chroma,cohere,gemini,groq,huggingface,infra,langchain,langgraph,litellm,llamaindex,mcp,mistral,otel,pinecone,qdrant,weaviate]; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21; extra == "dev"
Requires-Dist: respx>=0.20; extra == "dev"
Provides-Extra: dev-integration
Requires-Dist: waxell-observe[dev]; extra == "dev-integration"
Requires-Dist: openai<2,>=1.40; extra == "dev-integration"
Requires-Dist: anthropic<1,>=0.30; extra == "dev-integration"
Requires-Dist: groq<2,>=0.10; extra == "dev-integration"
Requires-Dist: mistralai<3,>=1.0; extra == "dev-integration"
Requires-Dist: cohere<7,>=5.0; extra == "dev-integration"
Requires-Dist: google-generativeai<1,>=0.5; extra == "dev-integration"
Requires-Dist: ai21<4,>=2.0; extra == "dev-integration"
Requires-Dist: together<3,>=1.0; extra == "dev-integration"
Requires-Dist: litellm<3,>=1.0; extra == "dev-integration"
Requires-Dist: chromadb<1,>=0.4; extra == "dev-integration"
Requires-Dist: faiss-cpu<2,>=1.7; extra == "dev-integration"
Requires-Dist: pinecone<6,>=3.0; extra == "dev-integration"
Requires-Dist: qdrant-client<3,>=1.7; extra == "dev-integration"
Requires-Dist: lancedb<1,>=0.6; extra == "dev-integration"
Requires-Dist: sentence-transformers<4,>=2.0; extra == "dev-integration"
Requires-Dist: fastembed<1,>=0.2; extra == "dev-integration"
Requires-Dist: voyageai<2,>=0.2; extra == "dev-integration"
Requires-Dist: flashrank<1,>=0.2; extra == "dev-integration"
Requires-Dist: presidio-analyzer<3,>=2.0; extra == "dev-integration"
Requires-Dist: dspy<3,>=2.0; extra == "dev-integration"
Requires-Dist: instructor<2,>=1.0; extra == "dev-integration"
Requires-Dist: boto3<2,>=1.28; extra == "dev-integration"
Requires-Dist: botocore<2,>=1.31; extra == "dev-integration"
Provides-Extra: dev-integration-extended
Requires-Dist: waxell-observe[dev-integration]; extra == "dev-integration-extended"
Requires-Dist: crewai>=0.30; extra == "dev-integration-extended"
Requires-Dist: pydantic-ai>=0.1; extra == "dev-integration-extended"
Requires-Dist: autogen-agentchat>=0.4; extra == "dev-integration-extended"
Requires-Dist: haystack-ai<3,>=2.0; extra == "dev-integration-extended"
Requires-Dist: semantic-kernel<2,>=1.0; extra == "dev-integration-extended"
Requires-Dist: deepgram-sdk<4,>=3.0; extra == "dev-integration-extended"
Requires-Dist: elevenlabs<2,>=1.0; extra == "dev-integration-extended"
Requires-Dist: faster-whisper<2,>=1.0; extra == "dev-integration-extended"
Requires-Dist: firecrawl-py<2,>=1.0; extra == "dev-integration-extended"
Requires-Dist: langsmith<1,>=0.1; extra == "dev-integration-extended"
Requires-Dist: langfuse<3,>=2.0; extra == "dev-integration-extended"
Requires-Dist: outlines>=0.0.30; extra == "dev-integration-extended"
Requires-Dist: marvin<3,>=2.0; extra == "dev-integration-extended"
Requires-Dist: weaviate-client<5,>=4.0; extra == "dev-integration-extended"
Dynamic: license-file

# waxell-observe

Lightweight observability and governance for any AI agent framework. Add production-grade tracing, cost tracking, and policy enforcement to your agents with a single line of code.

## Installation

```bash
# Core (HTTP telemetry only)
pip install waxell-observe

# With OpenTelemetry tracing (recommended)
pip install "waxell-observe[otel]"

# With infrastructure auto-instrumentation (HTTP, databases, caches, queues)
pip install "waxell-observe[infra]"

# With LangChain integration
pip install "waxell-observe[otel,langchain]"

# Everything
pip install "waxell-observe[all]"

# Development (includes test dependencies)
pip install "waxell-observe[dev,otel]"
```

## Quick Start

```python
import waxell_observe

# One-line init — configures tracing, AI/ML instrumentation, and infrastructure instrumentation
waxell_observe.init(api_key="wax_sk_...")

# That's it. All LLM calls, HTTP requests, database queries, and cache operations
# are now traced automatically.
```

After calling `init()`, every LLM call, HTTP request, database query, and cache operation made by your agent is automatically captured — model names, token counts, latency, SQL statements, HTTP endpoints, Redis commands — all nested in a trace tree visible in the Waxell dashboard.

## Auto-Instrumentation

### AI/ML Libraries

`init()` automatically detects and instruments installed AI/ML libraries:

| Library | What's Captured |
|---------|----------------|
| OpenAI SDK | All `chat.completions.create()` calls — model, tokens, latency, streaming |
| Anthropic SDK | All `messages.create()` calls — model, tokens, latency |
| LangChain | Chain executions, LLM calls, tool invocations via callback handler |
| + 145 more | Bedrock, Gemini, Mistral, Cohere, Groq, Pinecone, Chroma, etc. |

To instrument only specific AI/ML libraries:

```python
waxell_observe.init(api_key="wax_sk_...", instrument=["openai"])
```

### Infrastructure Libraries

When installed with the `[infra]` extra, `init()` also instruments infrastructure libraries to capture everything your agent touches:

**HTTP Clients:**

| Library | Span Examples |
|---------|--------------|
| httpx | `POST api.openai.com`, `GET api.example.com` |
| requests | `POST api.stripe.com`, `GET api.weather.com` |
| urllib3 | Lower-level HTTP spans |
| aiohttp | Async HTTP client spans |

**Databases:**

| Library | Span Examples |
|---------|--------------|
| psycopg2 / psycopg | `pg SELECT`, `pg INSERT` |
| asyncpg / aiopg | `pg SELECT` (async) |
| SQLAlchemy | `pg SELECT users`, `mysql INSERT orders` |
| PyMongo | `mongo FIND`, `mongo AGGREGATE` |
| PyMySQL / mysqlclient | `mysql SELECT`, `mysql INSERT` |
| pymssql | `mssql SELECT`, `mssql INSERT` |
| sqlite3 | `sqlite SELECT`, `sqlite CREATE` |
| Elasticsearch | `es SEARCH`, `es INDEX` |
| Cassandra | `cassandra SELECT` |

**Caching:**

| Library | Span Examples |
|---------|--------------|
| redis | `redis GET`, `redis SET`, `redis HGET` |
| pymemcache | `memcache GET`, `memcache SET` |

**Message Queues & Task Brokers:**

| Library | Span Examples |
|---------|--------------|
| Celery | `apply_async/task`, `run/task` |
| kafka-python / confluent-kafka | `kafka send events`, `kafka receive events` |
| pika / aio-pika | RabbitMQ publish/consume |
| aiokafka | Async Kafka spans |

**Cloud & RPC:**

| Library | Span Examples |
|---------|--------------|
| botocore | AWS SDK calls (`aws s3.PutObject`, `aws dynamodb.GetItem`) |
| boto3 (SQS) | `aws sqs.SendMessage` |
| gRPC | `grpc UserService.GetUser` |

### Example Trace Tree

With both AI/ML and infrastructure instrumentation enabled:

```
agent: rag-demo.document-qa (3200ms)
├── chain: analyze_query
│   ├── llm: chat gpt-4o (800ms)
│   │   └── tool: POST api.openai.com (790ms)
│   └── tool: redis GET session:abc (5ms)
├── chain: retrieve_documents
│   ├── tool: pg SELECT * FROM documents WHERE ... (15ms)
│   └── tool: POST pinecone.io/query (200ms)
├── chain: synthesize_answer
│   └── llm: chat gpt-4o (1000ms)
│       └── tool: POST api.openai.com (990ms)
└── [governance summary]
```

### Controlling Infrastructure Instrumentation

```python
# Default: auto-detect and instrument everything available
waxell_observe.init()

# Disable all infrastructure instrumentation
waxell_observe.init(instrument_infra=False)

# Only instrument specific libraries (allowlist)
waxell_observe.init(infra_libraries=["redis", "httpx", "psycopg2"])

# Instrument everything except specific libraries (blocklist)
waxell_observe.init(infra_exclude=["celery", "grpc"])
```

### Coexistence with Existing OTel

If your application already uses OpenTelemetry instrumentation, waxell-observe works alongside it:

- **No double-patching**: OTel instrumentors have a singleton guard. If you already called `RedisInstrumentor().instrument()`, our call is a safe no-op.
- **Additive**: We add our span processor to the global TracerProvider alongside your existing exporters (Datadog, Jaeger, Honeycomb, etc.). Your spans continue to flow to your backend and also appear in Waxell.
- **Context-gated**: Outside of agent runs, our processor does nothing — your app's normal telemetry is untouched.

## Manual Instrumentation

### `@waxell_agent` Decorator

Wrap any function to create an observed agent run:

```python
from openai import OpenAI

from waxell_observe import waxell_agent

openai_client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

@waxell_agent(agent_name="my-agent")
async def run_agent(query: str, waxell_ctx=None) -> str:
    # waxell_ctx is automatically injected when declared as a parameter
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": query}],
    )

    # Record steps for hierarchical trace visualization
    waxell_ctx.record_step("llm_response", {"model": "gpt-4o"})

    return response.choices[0].message.content
```
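
The decorated coroutine is awaited like any other async function; a minimal driver for the example above (the query string is illustrative):

```python
import asyncio

async def main() -> None:
    # run_agent is the decorated coroutine defined above
    answer = await run_agent("What is our refund policy?")
    print(answer)

if __name__ == "__main__":
    asyncio.run(main())
```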

### `WaxellContext` Context Manager

For more control, use the context manager directly. Works as both `async with` and plain `with`:

```python
from waxell_observe import WaxellContext

# Async
async with WaxellContext(agent_name="my-agent") as ctx:
    result = await my_agent.run(query)
    ctx.record_llm_call(model="gpt-4o", tokens_in=150, tokens_out=80)
    ctx.record_step("retrieval", {"documents": 5})
    ctx.set_result({"output": result})

# Sync — ideal for batch scripts, CLI tools, ETL pipelines
with WaxellContext(agent_name="my-agent") as ctx:
    result = my_agent.run(query)
    ctx.record_llm_call(model="gpt-4o", tokens_in=150, tokens_out=80)
    ctx.record_step("retrieval", {"documents": 5})
    ctx.set_result({"output": result})
```

### LangChain Integration

```python
from waxell_observe.integrations.langchain import create_langchain_handler

handler = create_langchain_handler(agent_name="my-langchain-agent")
chain.invoke(inputs, config={"callbacks": [handler]})
handler.flush_sync()
```

## Claude Code & Cowork

Add observability and security guardrails to Claude Code sessions:

```bash
# Set up hooks (basic observability)
wax observe claude-code setup

# With governance (policy enforcement on Bash/Edit/Write)
wax observe claude-code setup --governance

# With MCP tools (Claude can check policies proactively)
wax observe claude-code setup --governance --mcp
```

Every session is traced as an agent run — tool calls, LLM usage, and cost are captured automatically. The local guard provides instant security checks:

- **Destructive command blocking** — `rm -rf /`, fork bombs, pipe-to-shell
- **Sensitive file protection** — `.env`, private keys, credentials
- **Git safety** — block force push, hard reset, config edits on protected branches
- **Path boundary enforcement** — block writes outside project directory
- **Network access control** — block WebFetch to internal/private URLs
- **CI/CD protection** — require confirmation for Dockerfiles, CI configs, Terraform
- **Cowork conflict detection** — warn when teammate modified the same file

Customize via `.waxell/guard.json`:

```json
{
  "git_protected_branches": ["main", "master", "staging"],
  "max_file_changes": 30,
  "path_boundary_enabled": true
}
```

See the [full Claude Code docs](https://docs.waxell.dev/observe/integrations/claude-code) for server-side policies, MCP tools, and all configuration options.

## Session Tracking

Group related agent runs into sessions:

```python
from waxell_observe import generate_session_id, waxell_agent

session = generate_session_id()  # "sess_" + 16 hex chars

@waxell_agent(agent_name="chat-agent", session_id=session)
async def handle_message(msg: str) -> str:
    ...
```

## Tags and Metadata

Attach searchable metadata to spans:

```python
@waxell_agent(agent_name="my-agent")
async def run_agent(query: str, waxell_ctx=None) -> str:
    waxell_ctx.add_tags(environment="production", version="1.2.0")
    waxell_ctx.add_metadata(user_id="u_123", prompt_template="v3")
    ...
```

## Governance

### Policy Enforcement

Policies configured in the Waxell platform are automatically enforced:

```python
from waxell_observe import waxell_agent, PolicyViolationError

@waxell_agent(agent_name="my-agent", enforce_policy=True)
async def run_agent(query: str) -> str:
    ...  # Raises PolicyViolationError if blocked by policy

# Disable policy enforcement for development
@waxell_agent(agent_name="my-agent", enforce_policy=False)
async def run_agent_dev(query: str) -> str:
    ...
```

### Mid-Execution Governance

Enable cooperative policy checks during agent execution:

```python
@waxell_agent(agent_name="my-agent", mid_execution_governance=True)
async def run_agent(query: str, waxell_ctx=None) -> str:
    # Each record_step() triggers a server-side policy check.
    # If a policy blocks, PolicyViolationError is raised.
    waxell_ctx.record_step("step_1", {"tokens_so_far": 5000})
    waxell_ctx.record_step("step_2", {"tokens_so_far": 12000})  # May halt here
    ...
```

## Configuration

Configuration is resolved in priority order:

1. **Explicit `init()` arguments** (highest)
2. **Environment variables**
3. **Config file** (`~/.waxell/config`)

### `init()` Parameters

```python
waxell_observe.init(
    # Core
    api_key="wax_sk_...",        # API key (or WAXELL_API_KEY env var)
    api_url="https://...",        # API URL (or WAXELL_API_URL env var)
    debug=False,                  # Enable debug logging + console span export

    # Tracing
    capture_content=False,        # Include prompt/response content in traces
    resource_attributes=None,     # Custom OTel resource attributes on all spans
                                  # e.g. {"deployment.environment": "production"}

    # AI/ML Instrumentation
    instrument=None,              # List of AI/ML libraries (None = auto-detect all)
                                  # e.g. ["openai", "anthropic"]

    # Infrastructure Instrumentation
    instrument_infra=True,        # Enable infra auto-instrumentation
    infra_libraries=None,         # Allowlist (None = auto-detect all installed)
                                  # e.g. ["redis", "httpx", "psycopg2"]
    infra_exclude=None,           # Blocklist — exclude specific libraries
                                  # e.g. ["celery", "grpc"]

    # Prompt Guard
    prompt_guard=False,           # Enable client-side PII/injection detection
    prompt_guard_server=False,    # Enable server-side ML-powered detection
    prompt_guard_action="block",  # "block", "warn", or "redact"
)
```

### Environment Variables

| Variable | Description | Default |
|----------|-------------|---------|
| `WAXELL_API_KEY` | API key (`wax_sk_...`) | — |
| `WAXELL_API_URL` | Platform API URL | — |
| `WAXELL_OBSERVE` | Kill switch — set to `false` to disable all telemetry | `true` |
| `WAXELL_INSTRUMENT_INFRA` | Enable/disable infrastructure instrumentation | `true` |
| `WAXELL_INFRA_EXCLUDE` | Comma-delimited list of infra libraries to exclude | — |
| `WAXELL_DEBUG` | Enable debug logging | `false` |
| `WAXELL_CAPTURE_CONTENT` | Capture prompt/response content | `false` |
| `WAXELL_PROMPT_GUARD` | Enable prompt guard | `false` |
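
The same settings as a shell setup (the `my_agent.py` entry point is hypothetical; `init()` reads these variables at startup):

```bash
export WAXELL_API_KEY="wax_sk_..."
export WAXELL_INSTRUMENT_INFRA=true
export WAXELL_INFRA_EXCLUDE="celery,grpc"

python my_agent.py
```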

### Config File (`~/.waxell/config`)

INI-format config file with profile support:

```ini
[default]
api_url = https://api.waxell.dev
api_key = wax_sk_...
instrument_infra = true
infra_exclude = celery,grpc
debug = false
capture_content = false

[local]
api_url = http://localhost:8001
api_key = wax_sk_...
instrument_infra = true
```

The config file is created automatically by `wax login` or can be edited manually.

### Kill Switch

Disable all observability with a single environment variable:

```bash
export WAXELL_OBSERVE=false  # Disables init(), auto-instrumentation, and span emission
```

The agent code runs identically — only telemetry emission is suppressed.

## Architecture

```
Your Agent Code
    │
    ├─► waxell-observe SDK (this package)
    │       ├─► Auto-instrumented AI/ML spans (OpenAI, Anthropic, 145+ libraries)
    │       ├─► Auto-instrumented infra spans (HTTP, Redis, PostgreSQL, 30 libraries)
    │       ├─► HTTP REST API (runs, LLM calls, steps, policy checks)
    │       └─► OTel OTLP/HTTP (spans with gen_ai.* semantic conventions)
    │
    └─► Waxell Platform
            ├─► OTel Collector (tenant routing via X-Scope-OrgID)
            ├─► Tempo (distributed traces)
            ├─► Loki (structured logs)
            ├─► PostgreSQL (runs, LLM records, policy audit)
            └─► Grafana (dashboards, trace explorer)
```

Key design principles:

- **Dual data path**: OTel spans flow to Tempo for trace visualization; HTTP REST calls persist structured data to PostgreSQL for cost tracking, governance, and analytics.
- **Near-zero latency impact**: OTel spans export in a background thread via `BatchSpanProcessor`. Agent execution sees <0.01ms p99 overhead.
- **Fail-safe**: If the Waxell backend is unreachable, telemetry is silently dropped. Agent execution continues unimpaired.
- **Multi-tenant**: Tenant isolation via API key resolution at SDK init. Spans include `waxell.tenant_id` as an OTel resource attribute for collector routing.
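
The batching behavior behind the latency claim follows the standard OTel pattern: the hot path only enqueues, and a daemon thread ships batches. A toy pure-Python sketch of that pattern (not the SDK's actual implementation):

```python
import queue
import threading
import time

class BatchExporter:
    """Toy batch exporter: on_end() is O(1); a daemon thread ships batches."""

    def __init__(self, export_fn, batch_size=32, interval=0.5):
        self._q = queue.Queue()
        self._export_fn = export_fn
        self._batch_size = batch_size
        self._interval = interval
        threading.Thread(target=self._worker, daemon=True).start()

    def on_end(self, span):
        # Hot path: just enqueue; no network I/O on the caller's thread
        self._q.put(span)

    def _worker(self):
        while True:
            batch = [self._q.get()]  # block until at least one span arrives
            deadline = time.monotonic() + self._interval
            while len(batch) < self._batch_size:
                remaining = deadline - time.monotonic()
                if remaining <= 0:
                    break
                try:
                    batch.append(self._q.get(timeout=remaining))
                except queue.Empty:
                    break
            try:
                self._export_fn(batch)  # network I/O happens here, off the hot path
            except Exception:
                pass  # fail-safe: drop telemetry, never raise into the agent
```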

## Requirements

- Python 3.10+
- `httpx` (included in base install)
- OpenTelemetry SDK 1.20+ (optional, via `[otel]` extra)
- Infrastructure instrumentors (optional, via `[infra]` extra)

## Development

```bash
# Install with dev dependencies
pip install -e "observe/waxell-observe[dev,otel,infra]"

# Run SDK tests
pytest observe/waxell-observe/tests/ -v

# Run integration tests (requires Django)
cd controlplane/waxell-controlplane
DJANGO_SETTINGS_MODULE=config.settings pytest ../../tests/integration/ -v

# Run benchmarks
pytest tests/benchmarks/ -v --tb=short
```

## License

Apache 2.0 — see [LICENSE](LICENSE) for details.
