Metadata-Version: 2.4
Name: econ-instrumentation-sdk
Version: 0.1.1
Summary: Economic Instrumentation SDK
Requires-Python: >=3.12
Description-Content-Type: text/markdown
Requires-Dist: pydantic>=2.7.0
Requires-Dist: httpx>=0.27.0
Provides-Extra: dev
Requires-Dist: pytest>=8.3.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.24.0; extra == "dev"
Provides-Extra: integrations
Requires-Dist: openai<2.0,>=1.40; extra == "integrations"
Requires-Dist: anthropic<1.0,>=0.34; extra == "integrations"
Requires-Dist: langchain<0.4,>=0.2; extra == "integrations"
Requires-Dist: langgraph<0.4,>=0.2; extra == "integrations"
Provides-Extra: otel
Requires-Dist: opentelemetry-api<2.0,>=1.26; extra == "otel"

# Econ Instrumentation SDK

**Track LLM ROI in production (value, cost, uncertainty) in under 30 minutes.**

Production-ready Python SDK for Regulus ECP telemetry:
- non-blocking transport for execution/outcome telemetry
- hierarchical auto-instrumentation for LangGraph, LangChain, and provider SDKs
- strict join-key continuity for business attribution

## Install

```bash
pip install econ-instrumentation-sdk
```

With integrations:

```bash
pip install "econ-instrumentation-sdk[integrations]"
```

With OpenTelemetry bridge:

```bash
pip install "econ-instrumentation-sdk[otel]"
```

## Quickstart (10 lines, real metric)

```python
from econ_instrumentation import InstrumentationConfig, AutoInstrumentationConfig
from econ_instrumentation import configure, enable_auto_instrumentation, join_key_context
from econ_instrumentation import instrument_langchain_app, track_outcome, OutcomeEvent
configure(InstrumentationConfig(base_url="http://localhost:8000/v1", enabled=True))
enable_auto_instrumentation(AutoInstrumentationConfig(capture_mode="hierarchical"))
with join_key_context("CASE-123"):
    app = instrument_langchain_app(chain, "SupportRefundTriage", "model_v2")  # chain: your existing LangChain runnable
    _ = app.invoke({"input": "refund?"})
    track_outcome(OutcomeEvent(execution_id="out_case_123", join_key="CASE-123", capability_id="SupportRefundTriage", outcome_type="custom", outcome_value="approved"))
# then run: python3 demo/live_poc_simulation.py --days 2 --requests-per-day 40 --strict
# inspect AER + margin in demo/output/report_*.json
```

![Quickstart Output](../../docs/assets/quickstart-output.svg)

## Capture modes

- `hierarchical` (default): framework spans take priority; provider spans are captured only as a fallback
- `all_layers`: capture every layer and mark `primary_for_rollup`
- `provider_only`: OpenAI/Anthropic-only capture

## CLI

```bash
regulus init --base-url http://localhost:8000/v1
regulus demo --repo-root . --days 2 --requests-per-day 40 --strict
regulus compute --repo-root . --last 7d
```

## Recommended env flags

```bash
export ECP_AUTO_INSTRUMENT=true
export ECP_CAPTURE_MODE=hierarchical
export ECP_INSTRUMENT_LANGCHAIN=true
export ECP_INSTRUMENT_OPENAI=true
export ECP_INSTRUMENT_ANTHROPIC=true
```
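
These flags are booleans read from the environment. A sketch of typical truthy parsing (the accepted values shown are an assumption, not the SDK's documented rules):

```python
import os

# Hypothetical helper: how flags like ECP_AUTO_INSTRUMENT are commonly parsed.
def env_flag(name: str, default: bool = False) -> bool:
    raw = os.environ.get(name)
    if raw is None:
        return default
    return raw.strip().lower() in {"1", "true", "yes", "on"}

os.environ["ECP_AUTO_INSTRUMENT"] = "true"
print(env_flag("ECP_AUTO_INSTRUMENT"))  # True
print(env_flag("ECP_UNSET_FLAG"))       # False (unset -> default)
```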

## Compatibility (tested ranges)

- `openai >=1.40,<2.0`
- `anthropic >=0.34,<1.0`
- `langchain >=0.2,<0.4`
- `langgraph >=0.2,<0.4`

## Notes

- Prompts/responses are not captured by default (`capture_content=False`).
- Telemetry delivery never blocks the inference path (in-memory buffer plus background retries).
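
The buffer-plus-background-retries pattern can be sketched with a stdlib queue and a worker thread (illustrative only; the SDK's real httpx-based transport is more involved, and `BackgroundSender` is a hypothetical name):

```python
import queue
import threading

class BackgroundSender:
    """Sketch: producers enqueue and return immediately; a daemon
    thread drains the buffer and retries failed sends."""

    def __init__(self, send, max_retries: int = 3):
        self._q: queue.Queue = queue.Queue()
        self._send = send
        self._max_retries = max_retries
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def enqueue(self, event: dict) -> None:
        # Never blocks the caller's inference path.
        self._q.put(event)

    def _run(self) -> None:
        while True:
            event = self._q.get()
            if event is None:  # shutdown sentinel
                return
            for _ in range(self._max_retries):
                try:
                    self._send(event)
                    break
                except Exception:
                    continue  # retry without blocking producers

    def close(self) -> None:
        self._q.put(None)
        self._worker.join()

sent = []
sender = BackgroundSender(sent.append)
sender.enqueue({"execution_id": "out_1"})
sender.close()
print(sent)  # [{'execution_id': 'out_1'}]
```

A real transport would add batching, bounded buffers with a drop policy, and backoff between retries; the sketch only shows why the caller never waits on delivery.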
