Metadata-Version: 2.4
Name: freeride-gateway
Version: 0.3.0a7
Summary: Free-AI gateway: OpenAI-compatible local proxy that orchestrates free-tier inference across multiple providers
Project-URL: Homepage, https://github.com/Shaivpidadi/FreeRideV3
Project-URL: Repository, https://github.com/Shaivpidadi/FreeRideV3
Author: Shaishav Pidadi
License-Expression: MIT
License-File: LICENSE
Keywords: ai,gateway,llm,openai,openrouter,proxy
Classifier: Development Status :: 3 - Alpha
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Requires-Python: >=3.10
Requires-Dist: fastapi>=0.115
Requires-Dist: httpx<1,>=0.27
Requires-Dist: pydantic>=2.7
Requires-Dist: uvicorn[standard]>=0.30
Provides-Extra: dev
Requires-Dist: pytest-asyncio>=0.23; extra == 'dev'
Requires-Dist: pytest-httpx>=0.30; extra == 'dev'
Requires-Dist: pytest-timeout>=2.3; extra == 'dev'
Requires-Dist: pytest>=8; extra == 'dev'
Requires-Dist: ruff>=0.6; extra == 'dev'
Provides-Extra: e2e
Requires-Dist: openai>=2; extra == 'e2e'
Description-Content-Type: text/markdown

# FreeRide

One free AI endpoint. Five providers behind it. Your agents don't need to know.

```
$ curl -sSL https://free-ride.xyz/install.sh | sh
$ export OPENROUTER_API_KEY=sk-or-v1-...
$ freeride serve

freeride gateway listening on http://127.0.0.1:11343
  providers: openrouter, groq, huggingface
  point any OpenAI-compatible agent at:
    OPENAI_API_BASE=http://127.0.0.1:11343/v1
    OPENAI_API_KEY=any
```

That's it. Aider, Continue, OpenClaw, Hermes, the OpenAI Python SDK — anything that speaks OpenAI now speaks every free tier you have a key for.

## Demo

```
┌─ your agent ─────────┐         ┌─ freeride (localhost) ─┐         ┌─ providers ─┐
│                      │  POST   │                        │         │             │
│  chat.completions    │────────▶│  pick provider         │────────▶│  OpenRouter │ 429
│   .create(...)       │         │  pick key (not cooling)│  retry  │     ↓       │
│                      │         │  forward request       │────────▶│  Groq       │ ✓
│  ◀───────────────────│   200   │  ◀─────────────────────│         │             │
│                      │         │                        │         │  NIM, CF,   │
│                      │         │  X-FreeRide-Provider:  │         │  HF — only  │
│                      │         │   groq                 │         │  if needed  │
└──────────────────────┘         └────────────────────────┘         └─────────────┘
```

When OpenRouter rate-limits you, the next request goes to Groq. When Groq's daily token cap is hit, the next goes to HuggingFace. Your agent never sees a 429.

## Why this exists

You can already get a free tier from OpenRouter. And NVIDIA. And Groq. And Cloudflare Workers AI. And HuggingFace. They all have different limits, different free-detection rules, different ways of saying "you're done for today."

So you sign up for all of them and now you've got five API keys, five SDKs, and an agent that only knows about one. FreeRide is the small thing that sits between them and pretends to be one OpenAI endpoint.

- **Local-first.** The gateway runs on your machine. Prompts and completions never touch a FreeRide server.
- **BYO keys.** Bring your own free-tier keys. FreeRide doesn't issue any.
- **Free-only.** No paid fallback. No upsell. If every provider is exhausted, the request fails — better that than a surprise bill.

## Install

```bash
curl -sSL https://free-ride.xyz/install.sh | sh
```

The installer bootstraps `uv` if it's missing, then runs `uv tool install freeride-gateway`. The binary lands at `~/.local/bin/freeride`. Same shape as the bun.sh and astral.sh installers.

<details>
<summary>Or install manually</summary>

```bash
# uv (what the installer does)
uv tool install --prerelease=allow freeride-gateway

# pipx
pipx install --pip-args=--pre freeride-gateway

# pip + venv (installs into the venv only; re-activate it per shell)
python3 -m venv .venv && source .venv/bin/activate
pip install --pre freeride-gateway

# from source
git clone https://github.com/Shaivpidadi/FreeRideV3 && cd FreeRideV3
pip install -e .
```

PyPI distribution: `freeride-gateway`. CLI: `freeride`. Python ≥ 3.10.
</details>

## Get keys (any one is enough; more = better failover)

| Provider | Where | Env var |
|---|---|---|
| OpenRouter | https://openrouter.ai/keys | `OPENROUTER_API_KEY` |
| Groq | https://console.groq.com/keys | `GROQ_API_KEY` |
| NVIDIA NIM | https://build.nvidia.com | `NVIDIA_API_KEY` |
| Cloudflare Workers AI | https://dash.cloudflare.com/profile/api-tokens | `CLOUDFLARE_API_TOKEN` + `CLOUDFLARE_ACCOUNT_ID` |
| HuggingFace | https://huggingface.co/settings/tokens | `HF_TOKEN` |

Set whichever you have, then `freeride serve`. The gateway picks them up and rotates between them.

## Wire your agent

The fastest way is a binder:

```bash
freeride bind aider       # writes ~/.aider.conf.yml
freeride bind continue    # writes ~/.continue/config.yaml
freeride bind hermes      # writes ~/.hermes/config.yaml
freeride bind openclaw    # writes ~/.openclaw/openclaw.json
```

Or set the OpenAI vars yourself:

```bash
export OPENAI_API_BASE=http://localhost:11343/v1
export OPENAI_API_KEY=any
```

Anything OpenAI-shaped works. Tested with the openai-python SDK, Aider, Continue, Hermes, OpenClaw.

## Multi-key rotation

Got several free keys for the same provider? Pass them as a JSON array:

```bash
export OPENROUTER_API_KEY='["sk-or-v1-key1","sk-or-v1-key2","sk-or-v1-key3"]'
```

When key 1 hits 429 it goes on cooldown for 120s; key 2 takes the next request. Cooldowns persist across restarts (`~/.freeride/cooldown.json`).
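
The rotation can be sketched like this (a simplified model of the behavior above, not FreeRide's actual implementation; persistence to `~/.freeride/cooldown.json` is omitted):

```python
import time

COOLDOWN_S = 120  # per the README: a 429 puts a key on cooldown for 120s

class KeyRing:
    """Walk keys in order, skipping any that are still cooling down."""

    def __init__(self, keys):
        self.keys = list(keys)
        self.cooling = {}  # key -> timestamp when it went on cooldown

    def mark_rate_limited(self, key, now=None):
        self.cooling[key] = time.monotonic() if now is None else now

    def next_key(self, now=None):
        now = time.monotonic() if now is None else now
        for key in self.keys:
            started = self.cooling.get(key)
            if started is None or now - started >= COOLDOWN_S:
                return key
        return None  # every key is cooling: this provider is exhausted
```

When `next_key` returns `None`, the gateway moves on to the next provider rather than queuing the request.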

## How failover works

Per request, FreeRide walks `(provider, key)` pairs in order:

- `RATE_LIMIT` or `AUTH` → mark this key cooling, try the next key.
- `MODEL_NOT_FOUND` → skip this provider, try the next provider.
- Anything 5xx-ish → next pair.
- First successful response → ship it; stamp `X-FreeRide-Provider` header (or `_freeride_provider` field on JSON) so you can tell who actually served it.
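
A simplified sketch of that walk (the error kinds and the `send` callable are stand-ins for illustration, not FreeRide internals):

```python
# Outcome kinds for one upstream attempt (illustrative constants).
RATE_LIMIT, AUTH, MODEL_NOT_FOUND, SERVER_ERROR, OK = range(5)

def walk(pairs, send):
    """Try (provider, key) pairs in order until one succeeds."""
    skipped_providers = set()
    for provider, key in pairs:
        if provider in skipped_providers:
            continue  # MODEL_NOT_FOUND already ruled this provider out
        status, body = send(provider, key)
        if status == OK:
            return provider, body  # caller stamps X-FreeRide-Provider
        if status == MODEL_NOT_FOUND:
            skipped_providers.add(provider)
        # RATE_LIMIT / AUTH / SERVER_ERROR: fall through to the next pair
    return None, None  # every pair exhausted: the request fails
```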

Streaming uses buffer-first-chunk failover: hold the first SSE event until upstream confirms the stream is real. If it fails before the first chunk, retry. After the first chunk has shipped, mid-stream errors propagate (rare; documented).
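
Buffer-first-chunk failover can be sketched as a generator (simplified; `UpstreamError` and the shape of `attempts` are assumptions for illustration):

```python
class UpstreamError(Exception):
    """Stand-in for any pre-first-chunk upstream failure."""

def stream_with_failover(attempts):
    """Yield SSE events from the first attempt that produces a chunk."""
    for attempt in attempts:
        events = iter(attempt)
        try:
            # Hold the first event until upstream proves the stream is real.
            first = next(events)
        except (StopIteration, UpstreamError):
            continue  # failed before the first chunk: retry the next pair
        yield first
        # After the first chunk has shipped, mid-stream errors propagate.
        yield from events
        return
```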

## Telemetry

On by default. Hourly POST to `https://telemetry.free-ride.xyz/v1/beacon`:

```json
{
  "installation_id": "random-uuid-v4",
  "version": "0.3.0",
  "os": "darwin",
  "tokens_served": 412034,
  "request_count": 187,
  "providers_active": ["openrouter", "groq"],
  "uptime_hours": 8
}
```

Prompts, completions, model IDs, API keys, hostnames, IPs — never sent. The Worker doesn't log `cf-connecting-ip`. The first time you run any `freeride` command a banner prints the exact payload.

```bash
freeride telemetry off    # turn it off
freeride telemetry        # show what would be sent
```

## Commands

```
freeride serve                  start the gateway
freeride bind <agent>           write gateway URL into agent config
freeride telemetry [on|off]     manage telemetry
freeride list                   list available free models
freeride status                 show OpenClaw config + cache age (v2)
freeride auto                   auto-configure OpenClaw (v2)
freeride rotate                 swap primary if it fails (v2)
freeride-watcher                background daemon that rotates on failure
```

The v2 commands keep working for existing OpenClaw users.

## Providers

| Provider | Status | Notes |
|---|---|---|
| OpenRouter | shipped | full surface — chat, streaming, tools, vision, structured outputs |
| NVIDIA NIM | shipped | curated free-model allowlist; `NVIDIA_NIM_FREE_MODELS_OVERRIDE` to expand |
| Groq | shipped | hardcoded allowlist (Llama 3.x, Gemma 2, Mixtral, DeepSeek-R1-distill); `GROQ_FREE_MODELS_OVERRIDE` to expand |
| Cloudflare Workers AI | shipped | curated allowlist of cheap-per-neuron chat models; needs `CLOUDFLARE_ACCOUNT_ID` |
| HuggingFace Inference | shipped | full HF router catalog; budget governs access ($0.10/mo Free, $2/mo PRO) |

Adding a sixth: implement `freeride.core.provider.Provider` (`api_version=1`) in `freeride/providers/<name>.py`, register it in the conformance suite, done. See `CONTRIBUTING.md`.
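
As a rough sketch of the shape (the real interface is whatever `freeride.core.provider.Provider` defines; the method names below are invented for illustration):

```python
# Hypothetical provider skeleton -- see freeride/core/provider.py and
# CONTRIBUTING.md for the actual base class and conformance suite.
class ExampleProvider:
    api_version = 1
    name = "example"

    def free_models(self):
        """Return the model IDs this provider serves for free."""
        return ["example/small-chat"]

    async def chat(self, request):
        """Forward an OpenAI-shaped chat request; raise on upstream failure."""
        raise NotImplementedError
```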

## Agents

| Agent | `freeride bind` | Hot reload |
|---|---|---|
| OpenClaw | yes | needs restart |
| Aider | yes (`--scope home/cwd/git`) | needs restart |
| Continue | yes | yes |
| Hermes (NousResearch/hermes-agent) | yes | needs restart |

Or anything else: `OPENAI_API_BASE=http://localhost:11343/v1` + `OPENAI_API_KEY=any`.

## Docs

- [`knowledge/PLAN_GATEWAY.md`](knowledge/PLAN_GATEWAY.md) — design plan, decisions, telemetry spec
- [`knowledge/providers/`](knowledge/providers/) — per-provider technical references
- [`knowledge/CONSUMERS.md`](knowledge/CONSUMERS.md) — per-agent bind reference
- [`CONTRIBUTING.md`](CONTRIBUTING.md) — adding a provider or binder

## License

MIT.
