Metadata-Version: 2.4
Name: chatjimmy-proxy
Version: 0.1.0
Summary: OpenAI-compatible API proxy for chatjimmy.ai
Project-URL: Homepage, https://github.com/remixonwin/chatjimmy-proxy
Project-URL: Repository, https://github.com/remixonwin/chatjimmy-proxy
Project-URL: Bug Tracker, https://github.com/remixonwin/chatjimmy-proxy/issues
Author: chatjimmy-proxy contributors
License: MIT
License-File: LICENSE
Keywords: chatjimmy,fastapi,openai,proxy,python
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Topic :: Internet :: WWW/HTTP
Requires-Python: >=3.11
Requires-Dist: fastapi>=0.115.0
Requires-Dist: httpx>=0.28.0
Requires-Dist: playwright>=1.48.0
Requires-Dist: pydantic-settings>=2.6.0
Requires-Dist: pydantic>=2.9.0
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: structlog>=24.4.0
Requires-Dist: tenacity>=9.0.0
Requires-Dist: uvicorn[standard]>=0.32.0
Provides-Extra: dev
Requires-Dist: httpx>=0.28.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.24.0; extra == 'dev'
Requires-Dist: pytest>=8.3.0; extra == 'dev'
Requires-Dist: ruff>=0.8.0; extra == 'dev'
Description-Content-Type: text/markdown

# chatjimmy-proxy

OpenAI-compatible HTTP proxy for chatjimmy.ai. Point any OpenAI SDK or tool at
it and use model `jimmy`.

## Quick start

1. Clone and install:
   ```bash
   git clone <repo>
   cd chatjimmy-proxy
   uv sync
   uv run playwright install chromium
   ```
2. Configure:
   ```bash
   cp .env.example .env
   # edit PROXY_API_KEY (leave blank to disable auth)
   ```

   > **Note:** if `PROXY_API_KEY` is already set in your shell environment (for
   > example, some systems default it to your username), the proxy will require
   > that exact value in the `Authorization` header. Run
   > `export PROXY_API_KEY=` to clear it, or choose a different secret.
3. Run discovery once:
   ```bash
   uv run chatjimmy-discover
   ```
4. Start the proxy (default port 8000; change with the `PORT` env var):
   ```bash
   uv run chatjimmy-proxy
   # or explicitly:
   uv run uvicorn chatjimmy_proxy.main:app --host 0.0.0.0 --port ${PORT:-8000}
   ```
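
A minimal `.env` might look like this (variable names are taken from the steps above and the troubleshooting section below; the values are placeholders, not defaults):

```bash
# .env – example values only
PROXY_API_KEY=change-me   # leave empty to disable auth
PORT=8000                 # proxy listen port
HEADLESS=true             # set to false to watch discovery in a visible browser
```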

If you see "address already in use", set `PORT` to a free port (e.g. 8001) or
kill the process currently listening on that port.

## Usage

```bash
curl http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $PROXY_API_KEY" \
  -d '{"model":"jimmy","messages":[{"role":"user","content":"Hi"}]}'
```

Streaming: add `--no-buffer` and `"stream": true`.
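
For example, a streaming request looks like this (assuming the proxy is running locally as above):

```bash
# --no-buffer makes curl print chunks as they arrive instead of buffering
curl --no-buffer http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $PROXY_API_KEY" \
  -d '{"model":"jimmy","stream":true,"messages":[{"role":"user","content":"Hi"}]}'
```

With `"stream": true` the response should arrive as OpenAI-style `data:` chunks rather than a single JSON body.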

Python example:
```python
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="key")
resp = client.chat.completions.create(
    model="jimmy",
    messages=[{"role": "user", "content": "Hi"}],
)
print(resp.choices[0].message.content)
```

### Editors and agents

Any tool that lets you supply a custom OpenAI-compatible provider should work.
You need three things:

1. **Base URL** – the root of the OpenAI API, not a specific endpoint. Use
   `http://<host>:<port>/v1` (omit `/chat/completions`); most clients append
   the path themselves. If your base URL already ends in `/chat/completions`,
   requests go to paths like `/v1/chat/completions/chat/completions` and
   return 404.
2. **API key** – the secret from `.env` (or any string if auth is off).
3. **Model** – `jimmy`.
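
To see why the base URL must stop at `/v1`, here is a small illustration (host and port are just examples) of how a client joins the base URL with the endpoint path:

```python
# Clients append the endpoint path to whatever base URL you configure.
ENDPOINT = "/chat/completions"

good_base = "http://localhost:8000/v1"
bad_base = "http://localhost:8000/v1/chat/completions"

print(good_base + ENDPOINT)  # http://localhost:8000/v1/chat/completions
print(bad_base + ENDPOINT)   # ...doubled path, which the proxy answers with 404
```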

Correct Roo Code configuration example:

```json
{
  "provider": "OpenAI Compatible",
  "baseUrl": "http://localhost:8000/v1",
  "apiKey": "<your-proxy-key>",
  "model": "jimmy"
}
```

## Development

```bash
uv run pytest tests/ -v
uv run ruff check src/
uv run ruff format src/
```

## Packaging & publishing

Build with `uv run hatch build`. Releases are made by tagging `vX.Y.Z`;
GitHub Actions runs the tests and publishes to PyPI using the
`PYPI_API_TOKEN` secret.

## Troubleshooting

* **Port already in use** – if the proxy fails to start with
  `address already in use`:
  ```bash
  # locate the offending PID
  sudo lsof -i :8000 -t   # substitute whatever port you were using
  # kill it (or choose a different port)
  sudo kill <pid>
  # or directly:
  sudo kill -9 $(sudo lsof -i :8000 -t)
  ```
  Alternatively, set `PORT` to a free port before launching:
  ```bash
  PORT=8001 uv run chatjimmy-proxy
  ```

* **Discovery failures** – run with `HEADLESS=false` or switch to
  `mode: browser_relay` in the blueprint.

* **401/403** – clear `.jimmy_blueprint.json`/`.jimmy_state.json` and
  re-run discovery; ensure `PROXY_API_KEY` is correct.

* **Slow first response** – discovery runs on startup; subsequent
  requests are fast under HTTP‑replay mode.
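
For the port conflict above, you can also check from Python whether a port is free before launching. This is a standalone helper sketch, not part of the proxy: it simply tries to bind the port, which fails if something is already listening.

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    # If we can bind the port, no other process is listening on it.
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

print(port_is_free(8000))
```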

## License

MIT – see LICENSE.
