Metadata-Version: 2.4
Name: nanomon
Version: 0.2.1
Summary: Track LLM usage across DSPy program runs
License-Expression: MIT
Requires-Python: >=3.11
Requires-Dist: dspy-ai>=2.4.0
Provides-Extra: dev
Requires-Dist: pytest-asyncio>=0.21.0; extra == 'dev'
Requires-Dist: pytest>=7.0.0; extra == 'dev'
Description-Content-Type: text/markdown

# Nanomon

LLM observability for DSPy applications.

## What is Nanomon?

Nanomon is an observability platform for LLM applications built with DSPy. It automatically tracks token usage, costs, and tool calls across your DSPy program runs, giving you full visibility into your AI workflows.

## Features

- Automatic LLM token usage tracking
- Cost calculation and analytics
- Tool call tracking with the `@nanomon` decorator
- DSPy integration (Predict, ReAct, ChainOfThought)
- Run context management for grouping related calls
- Multiple storage backends (API, SQLite)
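To picture what tool-call tracking does, here is a minimal, hypothetical sketch of a decorator that records each call's name, arguments, and duration to a sink. This is an illustration of the pattern only, not Nanomon's actual `@nanomon` implementation; the `calls` list stands in for a real backend (API or SQLite).

```python
import functools
import time

# Stand-in for a Nanomon sink (API or SQLite backend) -- illustrative only.
calls = []


def track_tool(fn):
    """Record each call to `fn` (name, args, duration) before returning."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        calls.append({
            "tool": fn.__name__,
            "args": args,
            "kwargs": kwargs,
            "duration_s": time.perf_counter() - start,
        })
        return result
    return wrapper


@track_tool
def search(query: str) -> str:
    return f"results for {query}"


search("dspy callbacks")
print(calls[0]["tool"])  # search
```

In Nanomon itself, the decorator sends these records to the configured sink instead of an in-memory list, so tool calls show up alongside LLM calls in the same run.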

## Quick Start

```python
import asyncio

import dspy
from nanomon import NanomonRunContext, NanomonCallback, configure_nanomon


class MySignature(dspy.Signature):
    """Answer the input question."""
    input: str = dspy.InputField()
    output: str = dspy.OutputField()


async def main():
    # 1. Create context
    context = NanomonRunContext(default_tags=["production"])

    # 2. Configure tool tracking
    configure_nanomon(context._sink)

    # 3. Instrument your LM
    lm = dspy.LM(model="openai/gpt-4o-mini")
    lm = context.instrument_lm(lm)
    dspy.configure(lm=lm)

    # 4. Track runs with NanomonCallback
    with context.run(tags=["my-task"], metadata={"version": "1.0"}) as ctx:
        callback = NanomonCallback(ctx)
        dspy.configure(lm=lm, callbacks=[callback])

        # Each DSPy module call is tracked as a separate run
        predictor = dspy.Predict(MySignature)
        result = await predictor.acall(input="Hello")


asyncio.run(main())
```

> **Note**: Use `.acall()` for async DSPy module calls to ensure proper callback integration.

## Links

- **Website**: https://nanomon.ai
- **Dashboard**: https://app.nanomon.ai
- **Documentation**: https://docs.nanomon.ai

## License

MIT
