Metadata-Version: 2.4
Name: llm-sniffer
Version: 0.2.0
Summary: LLM Sniffer - OpenAI-compatible reverse proxy with a request/response inspector
Author: january
License: MIT License
        
        Copyright (c) 2026
        
        Permission is hereby granted, free of charge, to any person obtaining a copy
        of this software and associated documentation files (the "Software"), to deal
        in the Software without restriction, including without limitation the rights
        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
        copies of the Software, and to permit persons to whom the Software is
        furnished to do so, subject to the following conditions:
        
        The above copyright notice and this permission notice shall be included in all
        copies or substantial portions of the Software.
        
        THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
        SOFTWARE.
        
Project-URL: Homepage, https://github.com/web-infra-dev/midscene-skills
Project-URL: Bug-Tracker, https://github.com/web-infra-dev/midscene-skills/issues
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: fastapi>=0.100.0
Requires-Dist: uvicorn>=0.23.0
Requires-Dist: httpx>=0.24.0
Requires-Dist: pyyaml>=6.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.21; extra == "dev"
Dynamic: license-file

# LLM Sniffer

LLM Sniffer is an OpenAI-compatible reverse proxy with a built-in request/response inspector.

## Features

- **Reverse Proxy**: Forwards OpenAI-compatible API traffic to any configured upstream
- **Request/Response Inspector**: Monitor and inspect all LLM requests and responses
- **SSE Support**: Server-Sent Events for streaming responses
- **Modern UI**: Clean web interface for inspecting traffic
- **Multi-Upstream Support**: Configure multiple LLM backends
- **Dynamic Configuration**: Switch between upstreams via command line or config

## Installation

```bash
pip install llm-sniffer
```

## Quick Start

Start the proxy server with default settings:

```bash
llm-sniffer
```

Point your LLM client at the proxy's OpenAI-compatible base URL:
```
http://127.0.0.1:7654/v1
```

Then open http://127.0.0.1:7655 in your browser to inspect the captured traffic.
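
For example, with the official `openai` Python client (any OpenAI-compatible client works; the model name and API key below are placeholders, and depending on your setup the real key may instead come from the config file's `api_key`):

```python
from openai import OpenAI

# Talk to the sniffer instead of the upstream directly.
client = OpenAI(
    base_url="http://127.0.0.1:7654/v1",
    api_key="sk-placeholder",  # forwarded to the upstream; replace as needed
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder: use a model your upstream actually serves
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```

Each request and its response then shows up in the inspector UI.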

## Command Line Options

### Proxy Server Options

```bash
llm-sniffer [OPTIONS]

Options:
  --upstream-url URL     Direct upstream URL (highest priority)
  --upstream-name NAME   Use upstream from configuration file
  --upstream NAME/URL    Upstream name or URL (deprecated)
  --proxy-port PORT      Proxy service port (default: 7654)
  --ui-port PORT         UI service port (default: 7655)
  --max-records N        Maximum number of records to keep (default: 200)
  --think on|off         Enable/disable thinking mode (default: on)
  --host ADDRESS         Bind address (default: 127.0.0.1)
  --params JSON          Parameters to inject into each request body
```
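
The `--params` option takes a JSON object that is injected into each outgoing request body. The exact merge semantics aren't specified here; a shallow merge is a reasonable mental model (a sketch, not the actual implementation):

```python
import json

def inject_params(body: bytes, params: dict) -> bytes:
    """Sketch only: shallow-merge injected params into a JSON request body.

    Assumes injected keys override what the client sent; the real proxy
    may resolve conflicts differently.
    """
    payload = json.loads(body)
    payload.update(params)
    return json.dumps(payload).encode()

# Mimics: llm-sniffer --params '{"temperature": 0.7}'
print(inject_params(b'{"model": "local", "temperature": 1.0}', {"temperature": 0.7}))
```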

### Configuration Management

```bash
# Initialize default config file
llm-sniffer config init

# List all configured upstreams
llm-sniffer config list

# Set active upstream
llm-sniffer config set kimi

# Add new upstream
llm-sniffer config add myserver --url http://localhost:8080 --description "My Server"

# Remove upstream
llm-sniffer config remove myserver

# Print config file path
llm-sniffer config path
```

## Configuration File

Default config path: `~/.llm_sniffer/config.yaml`

```yaml
upstreams:
  local:
    url: http://127.0.0.1:8000
    api_key: ""
    description: Local LLM server (vLLM, Ollama, etc.)
  openai:
    url: https://api.openai.com
    api_key: ""
    description: OpenAI API
  qwen:
    url: https://dashscope.aliyuncs.com/compatible-mode
    api_key: ""
    description: Qwen (Alibaba Cloud)
  kimi:
    url: https://api.moonshot.cn
    api_key: ""
    description: Kimi (Moonshot AI)

active_upstream: local
proxy_port: 7654
ui_port: 7655
max_records: 200
think: on
host: 127.0.0.1
```
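
One YAML caveat: PyYAML (which this package depends on) follows YAML 1.1, where bare `on`/`off` parse as booleans, so `think: on` loads as `True`. A quick way to check what your config actually resolves to:

```python
from pathlib import Path
import yaml

config = yaml.safe_load(Path("~/.llm_sniffer/config.yaml").expanduser().read_text())
print(type(config["think"]), config["think"])  # <class 'bool'> True
print(config["upstreams"][config["active_upstream"]]["url"])
```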

## Upstream Selection Priority

1. `--upstream-url` (command line) - Highest priority
2. `--upstream-name` (command line)
3. `--upstream` (command line) - Deprecated
4. `active_upstream` in config file - Lowest priority
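
In other words, resolution takes the first source that is set. A sketch of that precedence (the function and argument names here are hypothetical, not the package's internals):

```python
def resolve_upstream_url(args, config: dict) -> str:
    """Hypothetical sketch of the documented precedence order."""
    if getattr(args, "upstream_url", None):    # 1. explicit URL wins
        return args.upstream_url
    if getattr(args, "upstream_name", None):   # 2. named upstream from config
        return config["upstreams"][args.upstream_name]["url"]
    if getattr(args, "upstream", None):        # 3. deprecated: name or URL
        ups = config.get("upstreams", {})
        return ups[args.upstream]["url"] if args.upstream in ups else args.upstream
    # 4. fall back to the config file's active_upstream
    return config["upstreams"][config["active_upstream"]]["url"]
```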

## Examples

```bash
# Use specific upstream URL
llm-sniffer --upstream-url http://localhost:8080

# Use upstream from config
llm-sniffer --upstream-name kimi

# Custom ports
llm-sniffer --proxy-port 8080 --ui-port 8081

# Start with custom upstream and inject parameters
llm-sniffer --upstream-name openai --params '{"temperature": 0.7}'
```
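
Since the proxy supports SSE, streaming requests pass through as well and appear in the inspector UI. A quick way to exercise that path with the `openai` client (the model name is again a placeholder):

```python
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:7654/v1", api_key="sk-placeholder")

# stream=True makes the upstream reply with Server-Sent Events,
# which the proxy relays chunk by chunk.
stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder
    messages=[{"role": "user", "content": "Write a haiku about proxies."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()
```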

## License

MIT License
