Metadata-Version: 2.4
Name: haon-worker
Version: 0.4.0
Summary: HAON PowerHub Marketplace — Worker Client (rent GPU compute via P2P WebRTC)
Author-email: HAON <alpha@haon.run>
License: Apache-2.0
Project-URL: Homepage, https://haon.run
Project-URL: Repository, https://github.com/caiorlm/HAON-PowerHub-Marketplace
Project-URL: Issues, https://github.com/caiorlm/HAON-PowerHub-Marketplace/issues
Keywords: haon,gpu,marketplace,ollama,llm,webrtc,p2p
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Internet
Classifier: Topic :: System :: Distributed Computing
Requires-Python: >=3.12
Description-Content-Type: text/markdown
Requires-Dist: click>=8.1.7
Requires-Dist: httpx>=0.27.2
Requires-Dist: websockets>=13.1
Requires-Dist: keyring>=25.4.1
Requires-Dist: structlog>=24.4.0
Requires-Dist: aiortc>=1.14.0

# haon-worker

CLI + GUI client for the [HAON PowerHub Marketplace](https://haon.run) — rent
GPU compute from miners by the second.

## What it does

Opens a tunnel from your local machine to a rented GPU running Ollama, ComfyUI,
vLLM, TGI, or any HTTP runtime. Your local tools (LM Studio, Jan, curl, the
OpenAI Python SDK, LangChain, anything pointing at `http://localhost:PORT`)
work transparently against the rented GPU — zero changes to your code.

Bytes flow **peer-to-peer over WebRTC DataChannel** when the network allows
(typically same LAN / residential NAT), with automatic fallback to the HAON
broker relay when ICE traversal fails. The data plane is end-to-end encrypted
on the WebRTC path (DTLS).
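The `--via auto` policy described above can be summarized as a small decision function. This is illustrative only; the function and parameter names here are hypothetical and not part of haon-worker's API:

```python
def pick_transport(ice_succeeded: bool, via: str = "auto") -> str:
    """Transport selection as described above: an explicit --via choice
    wins; otherwise prefer direct WebRTC, falling back to the broker
    relay when ICE traversal fails."""
    if via in ("webrtc", "broker"):
        return via  # user forced a specific path
    return "webrtc" if ice_succeeded else "broker"
```

With `--via auto`, a session on the same LAN typically ends up on the direct WebRTC path; a session behind a symmetric NAT falls back to the broker relay.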

## Install

```bash
pip install haon-worker
```

Requires Python 3.12+. `aiortc` depends on native libraries but ships pre-built
wheels for Windows / macOS / Linux (x86_64 and arm64), so no compiler toolchain
is needed.

For Windows users who don't want Python at all, a single-file `.exe` GUI is
available at <https://haon.run> (60+ MB; it embeds Python 3.13 and all
dependencies).

## Quickstart

```bash
# 1. Authenticate (creates ~/.haon/worker.toml + stores refresh token)
haon-worker login

# 2. Browse GPUs
haon-worker offers --runtime ollama --limit 10

# 3. Open a session against an offer
haon-worker session open <OFFER_ID> --runtime ollama_native

# 4. Tunnel a local port to the miner's runtime
#    (auto = WebRTC first, broker fallback)
haon-worker tunnel forward <SESSION_ID> \
    --local-port 22434 --remote-port 11434 --via auto

# 5. Use it
curl http://127.0.0.1:22434/api/tags
```

Point LM Studio / Jan / OpenAI SDK at `http://127.0.0.1:22434` and they work
against the rented GPU.
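For OpenAI-compatible runtimes (Ollama serves `/v1/chat/completions`), the only change a client needs is its base URL. A minimal stdlib sketch of building such a request against the local tunnel; the model name `llama3.2` is a placeholder, so use whatever model the rented miner has pulled:

```python
import json
import urllib.request

def chat_request(local_port: int, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request aimed at the
    local end of the haon-worker tunnel."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"http://127.0.0.1:{local_port}/v1/chat/completions",
        data=body,  # presence of data makes this a POST
        headers={"Content-Type": "application/json"},
    )

req = chat_request(22434, "llama3.2", "Hello!")
# With a tunnel running, send it with: urllib.request.urlopen(req)
```

The same base URL (`http://127.0.0.1:22434/v1`) works for the OpenAI Python SDK and LangChain; the API key can be any non-empty string, since Ollama ignores it.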

## CLI reference

```
haon-worker --help
  login            Sign in with email + password
  logout           Revoke refresh token
  whoami           Show the current account
  balance          Show wallet balance
  buy-credit       Mint a Stripe checkout session
  offers           List marketplace offers
  session open     Reserve a miner + start a session
  session list     List your past + active sessions
  session close    Close an active session
  tunnel forward   Forward a local port through the rented GPU
                   --via {auto|webrtc|broker}
  tunnel attach    Re-attach to a running session's broker tunnel
  job submit       Submit a job to a session (echo-style runtimes)
```

## Links

- Marketplace UI: <https://haon.run>
- Issues: <https://github.com/caiorlm/HAON-PowerHub-Marketplace/issues>
- Companion miner package: [`haon-agent`](https://pypi.org/project/haon-agent/)

## License

Apache-2.0
