Metadata-Version: 2.4
Name: halton-meter
Version: 0.1.1
Summary: Local LLM API proxy — captures usage, attributes cost to projects.
Project-URL: Homepage, https://meter.haltonlabs.com
Project-URL: Repository, https://github.com/haltonlabs/halton-meter
Project-URL: Issues, https://github.com/haltonlabs/halton-meter/issues
Project-URL: Changelog, https://github.com/haltonlabs/halton-meter/releases
Author-email: Halton Labs <hello@haltonlabs.com>
Maintainer-email: Halton Labs <hello@haltonlabs.com>
License-Expression: Apache-2.0
License-File: LICENSE
License-File: NOTICE
Keywords: anthropic,claude,cost-tracking,governance,llm,mitmproxy,observability,openai,proxy
Classifier: Development Status :: 3 - Alpha
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: System Administrators
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: MacOS
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Software Development
Classifier: Topic :: System :: Monitoring
Classifier: Topic :: System :: Networking :: Monitoring
Classifier: Typing :: Typed
Requires-Python: >=3.11
Requires-Dist: aiosqlite>=0.19
Requires-Dist: certifi>=2024.0
Requires-Dist: click>=8.1
Requires-Dist: fastapi>=0.110
Requires-Dist: greenlet>=3.0
Requires-Dist: httpx>=0.27
Requires-Dist: mitmproxy>=10.0
Requires-Dist: psutil>=5.9
Requires-Dist: pydantic>=2.0
Requires-Dist: rich>=13.0
Requires-Dist: sqlalchemy[asyncio]>=2.0
Requires-Dist: structlog>=24.0
Requires-Dist: tomli>=2.0; python_version < '3.11'
Requires-Dist: uvicorn>=0.27
Description-Content-Type: text/markdown

# Halton Meter

**Local LLM API proxy that captures usage and attributes cost to projects.**

Halton Meter is a governance and cost-attribution tool for LLM API spend. It runs a local proxy that observes outbound traffic to LLM providers (Anthropic, OpenAI, Gemini, Groq, etc.), logs every request with project-level attribution, computes accurate cost from published pricing, and surfaces it in a dashboard.
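The cost arithmetic behind that attribution is simple: tokens times the provider's published per-million-token rate, summed over input and output. A minimal sketch, with illustrative prices (not halton-meter's actual pricing table):

```python
# Hypothetical pricing table: USD per million tokens as (input, output).
# Figures are illustrative only, not halton-meter's real data.
PRICES = {
    "claude-sonnet-4": (3.00, 15.00),
    "gpt-4o": (2.50, 10.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for one request, given per-million-token rates."""
    in_rate, out_rate = PRICES[model]
    return (input_tokens / 1_000_000) * in_rate + (output_tokens / 1_000_000) * out_rate

if __name__ == "__main__":
    cost = request_cost("claude-sonnet-4", 10_000, 2_000)
    print(f"${cost:.4f}")
```

Halton Meter does this per request at capture time, so totals in the dashboard are sums over logged requests rather than estimates.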

Designed for solo developers, agencies, and in-house dev teams who make heavy use of Claude Code, the Anthropic SDK, or other LLM clients and want visibility into what is being spent, and on what.

## Install

```bash
pipx install halton-meter
# or
uvx halton-meter
```

Requires Python 3.11+.

## Quick start

```bash
halton-meter init      # one-time: install CA cert + configure system proxy (asks for sudo)
halton-meter start     # start the proxy daemon
halton-meter status    # check it's running
```

Once running, all your LLM API calls are logged to `~/.halton-meter/`. Open the dashboard (separate component, see project repo) to see cost and token usage by project.

To stop and remove the proxy configuration:

```bash
halton-meter stop
halton-meter uninstall
```

## What it does

- Intercepts HTTPS traffic to LLM provider endpoints via a local mitmproxy instance
- Parses request/response bodies to extract the model, token counts, and cost
- Attributes each request to a project based on the calling process
- Stores everything locally in SQLite (`~/.halton-meter/db.sqlite`)
- Optionally syncs to a backend for dashboard visualisation
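Because the store is plain SQLite, you can query it directly with the stdlib. A sketch assuming a hypothetical `requests` table with `(project, model, input_tokens, output_tokens, cost_usd)` columns; the real schema may differ, so inspect it first with `sqlite3 ~/.halton-meter/db.sqlite .schema`:

```python
import sqlite3
from pathlib import Path

# Default location per the docs above; the table layout below is an assumption.
DB = Path.home() / ".halton-meter" / "db.sqlite"

def cost_by_project(db_path=DB):
    """Sum tokens and cost per project, most expensive first."""
    con = sqlite3.connect(db_path)
    try:
        return con.execute(
            """SELECT project,
                      SUM(input_tokens + output_tokens) AS tokens,
                      SUM(cost_usd) AS cost
               FROM requests
               GROUP BY project
               ORDER BY cost DESC"""
        ).fetchall()
    finally:
        con.close()

# Usage:
#   for project, tokens, cost in cost_by_project():
#       print(f"{project:30} {tokens:>10} tok  ${cost:.2f}")
```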

## What it doesn't do

- Doesn't wrap SDKs — your code stays exactly as it is
- Doesn't intercept anything you don't ask it to — only configured LLM endpoints
- Doesn't send your data anywhere by default — runs entirely locally
- Doesn't break your workflow — if the proxy fails, traffic falls through to the real provider

## Known limitations (v1.0)

A few things halton-meter cannot yet capture. This is an honest list, not a roadmap.

- **Hardcoded HTTP stacks bypass the proxy.** Some clients ignore system proxy settings: `libcurl`-based clients that never read the proxy environment, Go programs whose custom `http.Transport` doesn't use `http.ProxyFromEnvironment` (Go's default transport honours proxy env vars, but custom transports frequently don't), and any client that opens raw TLS sockets to known provider IPs. Calls from those clients reach the provider without being metered. The fix is per-client: no general-purpose interception catches them all short of rewriting their network stacks.
- **Late-coming network interfaces.** macOS network services are enumerated once when the daemon starts (via `networksetup -listallnetworkservices`, cached for the process lifetime). If you plug in a new interface — Thunderbolt dock, USB-tethered iPhone — after the daemon is running, that interface won't be metered until the daemon restarts. Restart with `halton-meter stop && halton-meter start`.
- **WSL2 not supported.** The Windows path is a stub in v1.0; WSL2 in particular is not on the supported matrix because routing both the WSL2 guest and the Windows host through one proxy needs work we haven't done. Linux-native and macOS are the supported platforms.
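For the first limitation, a quick diagnostic: check whether a given Python process can even see proxy settings. The port below is mitmproxy's default (8080), assumed here for illustration; confirm the actual listen address with `halton-meter status`.

```python
import os
import urllib.request

# Illustrative: point env proxies at a local mitmproxy (default port 8080).
os.environ["HTTPS_PROXY"] = "http://127.0.0.1:8080"

# Stdlib clients and `requests` resolve proxies through this same lookup
# (httpx reads the same env vars directly). If your client consults neither
# this nor the system proxy, its traffic bypasses the meter.
proxies = urllib.request.getproxies()
print(proxies.get("https"))
```

If this prints `None` in your environment, the process is one of the bypass cases above and its traffic will not appear in the logs.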

## Project

Halton Meter is open source under Apache 2.0. Built by [Halton Labs](https://haltonlabs.com).

- Source: https://github.com/haltonlabs/halton-meter
- Issues: https://github.com/haltonlabs/halton-meter/issues
- Security: see `SECURITY.md` in the repository

This package is the daemon component. The dashboard lives in the same repository and is run separately.
