Metadata-Version: 2.4
Name: nanomon
Version: 0.2.2
Summary: Track LLM usage across DSPy program runs
License-Expression: MIT
Requires-Python: >=3.11
Requires-Dist: dspy-ai>=2.4.0
Provides-Extra: dev
Requires-Dist: pytest-asyncio>=0.21.0; extra == 'dev'
Requires-Dist: pytest>=7.0.0; extra == 'dev'
Description-Content-Type: text/markdown

# Nanomon

LLM observability for DSPy applications.

## What is Nanomon?

Nanomon is an observability platform for LLM applications built with DSPy. It automatically tracks token usage, costs, and tool calls across your DSPy program runs, giving you full visibility into your AI workflows.

## Features

- Automatic LLM token usage tracking
- Cost calculation and analytics
- Tool call tracking with the `@nanomon` decorator
- DSPy integration (Predict, ReAct, ChainOfThought)
- Run context management for grouping related calls
- Multiple storage backends (API, SQLite)
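To show the idea behind the `@nanomon` tool-call decorator, here is a minimal, self-contained sketch of how such a decorator can record each invocation's name, duration, and outcome. This is illustrative only: the `TOOL_CALLS` list and the recorded fields are stand-ins for Nanomon's real sink, not its actual API.

```python
import functools
import time

# Illustrative stand-in for a Nanomon sink; the real decorator
# reports to the configured storage backend instead.
TOOL_CALLS = []


def nanomon(func):
    """Record each call's tool name, duration, and success flag."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            result = func(*args, **kwargs)
            TOOL_CALLS.append({"tool": func.__name__, "ok": True,
                               "seconds": time.perf_counter() - start})
            return result
        except Exception:
            TOOL_CALLS.append({"tool": func.__name__, "ok": False,
                               "seconds": time.perf_counter() - start})
            raise
    return wrapper


@nanomon
def search(query: str) -> str:
    """A toy tool function; any plain Python callable works."""
    return f"results for {query!r}"


search("dspy callbacks")
print(TOOL_CALLS[0]["tool"])  # search
```

Because the wrapper re-raises exceptions after recording them, failed tool calls still surface to the caller while remaining visible in the trace.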

## Quick Start

```python
import asyncio

import dspy
from nanomon import NanomonRunContext, configure_nanomon


class MySignature(dspy.Signature):
    """Answer the input."""
    input: str = dspy.InputField()
    output: str = dspy.OutputField()


# 1. Create context and configure tool tracking
context = NanomonRunContext(default_tags=["production"])
configure_nanomon(context._sink)

# 2. Instrument your LM
lm = dspy.LM(model="openai/gpt-4o-mini")
lm = context.instrument_lm(lm)
dspy.configure(lm=lm)


# 3. Track runs - DSPy callbacks are auto-configured
async def main():
    with context.run(tags=["my-task"], metadata={"version": "1.0"}):
        predictor = dspy.Predict(MySignature)
        result = await predictor.acall(input="Hello")
        # Each DSPy module call is automatically tracked as a separate run


asyncio.run(main())
```

> **Note**: Use `.acall()` for async DSPy module calls to ensure proper callback integration.
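Grouping related calls under a run, as `context.run(...)` does above, can be pictured as a context manager that stamps every record emitted inside it with the active run's id, tags, and metadata. The `RunContext` class below is a simplified stand-in for illustration, not Nanomon's actual `NanomonRunContext` implementation:

```python
import contextlib
import uuid


class RunContext:
    """Illustrative stand-in: tags each record with the active run."""

    def __init__(self):
        self.records = []
        self._active = None

    @contextlib.contextmanager
    def run(self, tags=None, metadata=None):
        # Open a run: every record until exit carries this run's info.
        self._active = {"run_id": uuid.uuid4().hex,
                        "tags": tags or [],
                        "metadata": metadata or {}}
        try:
            yield self._active["run_id"]
        finally:
            self._active = None

    def record(self, event):
        # Merge the event with the active run's fields, if any.
        self.records.append({**self._active, **event}
                            if self._active else dict(event))


ctx = RunContext()
with ctx.run(tags=["my-task"], metadata={"version": "1.0"}):
    ctx.record({"tokens": 42})

print(ctx.records[0]["tags"])  # ['my-task']
```

The `try`/`finally` guarantees the run is closed even if the body raises, so later records are never mis-attributed to a finished run.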

## Links

- **Website**: https://nanomon.ai
- **Dashboard**: https://app.nanomon.ai
- **Documentation**: https://docs.nanomon.ai

## License

MIT
