Metadata-Version: 2.4
Name: nanomon
Version: 0.2.0
Summary: Track LLM usage across DSPy program runs
License-Expression: MIT
Requires-Python: >=3.11
Requires-Dist: dspy-ai>=2.4.0
Provides-Extra: dev
Requires-Dist: pytest-asyncio>=0.21.0; extra == 'dev'
Requires-Dist: pytest>=7.0.0; extra == 'dev'
Description-Content-Type: text/markdown

# Nanomon

Track LLM usage across DSPy program runs.

## Installation

```bash
pip install nanomon

# With development dependencies (pytest, pytest-asyncio)
pip install "nanomon[dev]"
```

## Quick Start

```python
import asyncio

import dspy
from nanomon import NanomonRunContext

# Create context - that's it! No pricing or sink configuration needed
context = NanomonRunContext(default_tags=["production"])

# Instrument your LM
lm = dspy.LM(model="gpt-4o-mini")
dspy.settings.configure(lm=context.instrument_lm(lm))

# Track runs (context.react is async, so await it from a coroutine)
async def main():
    # my_module is your dspy.Module (e.g. a dspy.ReAct instance)
    with context.run(tags=["experiment-1"], metadata={"dataset": "qa"}):
        result = await context.react(my_module, question="What is 2+2?")

asyncio.run(main())
```

## Features

- Track LLM token usage automatically
- Automatic cost calculation (handled by backend)
- Tool call tracking with `@track_tool` decorator
- DSPy module integration (ReAct, Predict, ChainOfThought)
- Zero configuration - just instantiate and go
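To make the first two features concrete, here is a minimal sketch of the general instrumentation pattern: a wrapper intercepts each LM call and accumulates token counts from the response. This is illustrative only, not nanomon's actual implementation; the `UsageTracker` and `InstrumentedLM` names, the `usage` dict shape, and the `fake_lm` stand-in are all assumptions.

```python
from dataclasses import dataclass


@dataclass
class UsageTracker:
    """Accumulates token counts across LM calls (illustrative only)."""
    prompt_tokens: int = 0
    completion_tokens: int = 0

    def record(self, usage: dict) -> None:
        self.prompt_tokens += usage.get("prompt_tokens", 0)
        self.completion_tokens += usage.get("completion_tokens", 0)


class InstrumentedLM:
    """Wraps an LM-like callable and records usage after each call."""

    def __init__(self, lm, tracker: UsageTracker):
        self._lm = lm
        self._tracker = tracker

    def __call__(self, *args, **kwargs):
        response = self._lm(*args, **kwargs)
        # Assumes the response carries a `usage` dict, as many LM clients do.
        self._tracker.record(response.get("usage", {}))
        return response


# Stand-in LM for demonstration; a real dspy.LM would go here.
def fake_lm(prompt: str) -> dict:
    return {"text": "4", "usage": {"prompt_tokens": 5, "completion_tokens": 1}}


tracker = UsageTracker()
lm = InstrumentedLM(fake_lm, tracker)
lm("What is 2+2?")
print(tracker.prompt_tokens, tracker.completion_tokens)  # → 5 1
```

The wrapper is transparent to callers: it returns the underlying response unchanged, so tracking can be layered onto existing code without modifying call sites.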

## Tool Tracking

```python
from nanomon import track_tool, configure_tool_tracking

# Configure tool tracking once at startup, using the context's sink
# (`context` is the NanomonRunContext from the Quick Start above)
configure_tool_tracking(context._sink)

@track_tool(name="web_search")
def search_web(query: str) -> str:
    ...  # your tool implementation: return the search results as a string
```
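For intuition, the decorator pattern behind this style of tool tracking can be sketched in plain Python. This is not nanomon's code; the `track_tool_sketch` name and the `TOOL_CALLS` list (a stand-in for the sink) are assumptions made for illustration.

```python
import functools
import time

# Stand-in for a sink: nanomon presumably forwards records elsewhere.
TOOL_CALLS = []


def track_tool_sketch(name: str):
    """Illustrative decorator in the spirit of @track_tool."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                result = fn(*args, **kwargs)
                TOOL_CALLS.append({"tool": name, "ok": True,
                                   "duration_s": time.perf_counter() - start})
                return result
            except Exception:
                # Record the failure, then re-raise so callers still see it.
                TOOL_CALLS.append({"tool": name, "ok": False,
                                   "duration_s": time.perf_counter() - start})
                raise
        return wrapper
    return decorator


@track_tool_sketch(name="web_search")
def search_web(query: str) -> str:
    return f"results for {query!r}"


search_web("dspy")
print(TOOL_CALLS[0]["tool"])  # → web_search
```

Wrapping at the decorator level means every call is recorded with its outcome and duration, including calls that raise, without the tool body needing any tracking logic of its own.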

## License

MIT
