Metadata-Version: 2.4
Name: llm-grill
Version: 0.1.4
Summary: CLI for benchmarking LLM inference servers (vLLM, SGLang, llama.cpp)
Project-URL: Homepage, https://github.com/fisheatfish/llm-grill
Project-URL: Repository, https://github.com/fisheatfish/llm-grill
Project-URL: Issues, https://github.com/fisheatfish/llm-grill/issues
Project-URL: Changelog, https://github.com/fisheatfish/llm-grill/blob/main/CHANGELOG.md
Author: Karim Sayadi, Gireg Roussel
License:                                  Apache License
                                   Version 2.0, January 2004
                                http://www.apache.org/licenses/
        
           TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
        
           1. Definitions.
        
              "License" shall mean the terms and conditions for use, reproduction,
              and distribution as defined by Sections 1 through 9 of this document.
        
              "Licensor" shall mean the copyright owner or entity authorized by
              the copyright owner that is granting the License.
        
              "Legal Entity" shall mean the union of the acting entity and all
              other entities that control, are controlled by, or are under common
              control with that entity. For the purposes of this definition,
              "control" means (i) the power, direct or indirect, to cause the
              direction or management of such entity, whether by contract or
              otherwise, or (ii) ownership of fifty percent (50%) or more of the
              outstanding shares, or (iii) beneficial ownership of such entity.
        
              "You" (or "Your") shall mean an individual or Legal Entity
              exercising permissions granted by this License.
        
              "Source" form shall mean the preferred form for making modifications,
              including but not limited to software source code, documentation
              source, and configuration files.
        
              "Object" form shall mean any form resulting from mechanical
              transformation or translation of a Source form, including but
              not limited to compiled object code, generated documentation,
              and conversions to other media types.
        
              "Work" shall mean the work of authorship made available under
              the License, as indicated by a copyright notice that is included in
              or attached to the work (an example is provided in the Appendix below).
        
              "Derivative Works" shall mean any work, whether in Source or Object
              form, that is based on (or derived from) the Work and for which the
              editorial revisions, annotations, elaborations, or other modifications
              represent, as a whole, an original work of authorship. For the purposes
              of this License, Derivative Works shall not include works that remain
              separable from, or merely link (or bind by name) to the interfaces of,
              the Work and Derivative Works thereof.
        
              "Contribution" shall mean, as submitted to the Licensor for inclusion
              in the Work by the copyright owner or by an individual or Legal Entity
              authorized to submit on behalf of the copyright owner. For the purposes
              of this definition, "submitted" means any form of electronic, verbal,
              or written communication sent to the Licensor or its representatives,
              including but not limited to communication on electronic mailing lists,
              source code control systems, and issue tracking systems that are managed
              by, or on behalf of, the Licensor for the purpose of discussing and
              improving the Work, but excluding communication that is conspicuously
              marked or designated in writing by the copyright owner as "Not a
              Contribution."
        
              "Contributor" shall mean Licensor and any Legal Entity on behalf of
              whom a Contribution has been received by the Licensor and included
              within the Work.
        
           2. Grant of Copyright License. Subject to the terms and conditions of
              this License, each Contributor hereby grants to You a perpetual,
              worldwide, non-exclusive, no-charge, royalty-free, irrevocable
              copyright license to reproduce, prepare Derivative Works of,
              publicly display, publicly perform, sublicense, and distribute the
              Work and such Derivative Works in Source or Object form.
        
           3. Grant of Patent License. Subject to the terms and conditions of
              this License, each Contributor hereby grants to You a perpetual,
              worldwide, non-exclusive, no-charge, royalty-free, irrevocable
              (except as stated in this section) patent license to make, have made,
              use, offer to sell, sell, import, and otherwise transfer the Work,
              where such license applies only to those patent claims licensable
              by such Contributor that are necessarily infringed by their
              Contribution(s) alone or by the combination of their Contribution(s)
              with the Work to which such Contribution(s) was submitted. If You
              institute patent litigation against any entity (including a
              cross-claim or counterclaim in a lawsuit) alleging that the Work
              or a Contribution incorporated within the Work constitutes direct
              or contributory patent infringement, then any patent licenses
              granted to You under this License for that Work shall terminate
              as of the date such litigation is filed.
        
           4. Redistribution. You may reproduce and distribute copies of the
              Work or Derivative Works thereof in any medium, with or without
              modifications, and in Source or Object form, provided that You
              meet the following conditions:
        
              (a) You must give any other recipients of the Work or Derivative
                  Works a copy of this License; and
        
              (b) You must cause any modified files to carry prominent notices
                  stating that You changed the files; and
        
              (c) You must retain, in the Source form of any Derivative Works
                  that You distribute, all copyright, patent, trademark, and
                  attribution notices from the Source form of the Work,
                  excluding those notices that do not pertain to any part of
                  the Derivative Works; and
        
              (d) If the Work includes a "NOTICE" text file as part of its
                  distribution, You must include a readable copy of the
                  attribution notices contained within such NOTICE file, in
                  at least one of the following places: within a NOTICE text
                  file distributed as part of the Derivative Works; within
                  the Source form or documentation, if provided along with the
                  Derivative Works; or, within a display generated by the
                  Derivative Works, if and wherever such third-party notices
                  normally appear. The contents of the NOTICE file are for
                  informational purposes only and do not modify the License.
                  You may add Your own attribution notices within Derivative
                  Works that You distribute, alongside or in addition to the
                  NOTICE text from the Work, provided that such additional
                  attribution notices cannot be construed as modifying the License.
        
              You may add Your own copyright statement to Your modifications and
              may provide additional or different license terms and conditions
              for use, reproduction, or distribution of Your modifications, or
              for any such Derivative Works as a whole, provided Your use,
              reproduction, and distribution of the Work otherwise complies with
              the conditions stated in this License.
        
           5. Submission of Contributions. Unless You explicitly state otherwise,
              any Contribution intentionally submitted for inclusion in the Work
              by You to the Licensor shall be under the terms and conditions of
              this License, without any additional terms or conditions.
              Notwithstanding the above, nothing herein shall supersede or modify
              the terms of any separate license agreement you may have executed
              with Licensor regarding such Contributions.
        
           6. Trademarks. This License does not grant permission to use the trade
              names, trademarks, service marks, or product names of the Licensor,
              except as required for reasonable and customary use in describing the
              origin of the Work and reproducing the content of the NOTICE file.
        
           7. Disclaimer of Warranty. Unless required by applicable law or
              agreed to in writing, Licensor provides the Work (and each
              Contributor provides its Contributions) on an "AS IS" BASIS,
              WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
              implied, including, without limitation, any warranties or
              conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or
              FITNESS FOR A PARTICULAR PURPOSE. You are solely
              responsible for determining the appropriateness of using or
              redistributing the Work and assume any risks associated with Your
              exercise of permissions under this License.
        
           8. Limitation of Liability. In no event and under no legal theory,
              whether in tort (including negligence), contract, or otherwise,
              unless required by applicable law (such as deliberate and grossly
              negligent acts) or agreed to in writing, shall any Contributor be
              liable to You for damages, including any direct, indirect, special,
              incidental, or exemplary damages of any character arising as a
              result of this License or out of the use or inability to use the
              Work (including but not limited to damages for loss of goodwill,
              work stoppage, computer failure or malfunction, or any and all other
              commercial damages or losses), even if such Contributor has been
              advised of the possibility of such damages.
        
           9. Accepting Warranty or Additional Liability. While redistributing
              the Work or Derivative Works thereof, You may choose to offer,
              and charge a fee for, acceptance of support, warranty, indemnity,
              or other liability obligations and/or rights consistent with this
              License. However, in accepting such obligations, You may act only
              on Your own behalf and on Your sole responsibility, not on behalf
              of any other Contributor, and only if You agree to indemnify,
              defend, and hold each Contributor harmless for any liability
              incurred by, or claims asserted against, such Contributor by
              reason of your accepting any such warranty or additional
              liability.
        
           END OF TERMS AND CONDITIONS
        
           Copyright 2026 llm-grill contributors
        
           Licensed under the Apache License, Version 2.0 (the "License");
           you may not use this file except in compliance with the License.
           You may obtain a copy of the License at
        
               http://www.apache.org/licenses/LICENSE-2.0
        
           Unless required by applicable law or agreed to in writing, software
           distributed under the License is distributed on an "AS IS" BASIS,
           WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
           See the License for the specific language governing permissions and
           limitations under the License.
License-File: LICENSE
Keywords: benchmark,inference,llamacpp,llm,sglang,vllm
Classifier: Development Status :: 3 - Alpha
Classifier: Environment :: Console
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: System :: Benchmark
Classifier: Typing :: Typed
Requires-Python: >=3.11
Requires-Dist: anyio>=4
Requires-Dist: httpx>=0.27
Requires-Dist: pydantic>=2
Requires-Dist: pyyaml>=6
Requires-Dist: rich>=13
Requires-Dist: typer>=0.12
Provides-Extra: dev
Requires-Dist: pytest-asyncio>=0.23; extra == 'dev'
Requires-Dist: pytest-cov>=5; extra == 'dev'
Requires-Dist: pytest-mock>=3; extra == 'dev'
Requires-Dist: pytest>=8; extra == 'dev'
Requires-Dist: respx>=0.21; extra == 'dev'
Requires-Dist: ruff>=0.4; extra == 'dev'
Description-Content-Type: text/markdown

# llm-grill

[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](LICENSE)
[![Python](https://img.shields.io/badge/python-3.11%2B-blue.svg)](https://www.python.org/)
[![CI](https://github.com/fisheatfish/llm-grill/actions/workflows/ci.yml/badge.svg)](https://github.com/fisheatfish/llm-grill/actions/workflows/ci.yml)

CLI for benchmarking LLM inference servers: vLLM, SGLang, llama.cpp, LiteLLM.

Measures **TTFT**, **TPOT**, **end-to-end latency**, **throughput**, **success rate**, **KV cache quality metrics**, and **load ramp** (breaking-point detection) on multi-turn conversation scenarios.

![llm-grill Demo](docs/demo.gif)

---

## Install

Requires **Python 3.11+** and [uv](https://docs.astral.sh/uv/).

```bash
uv tool install llm-grill
```

Verify:

```bash
llm-grill --version
```

---

## Quick start

Copy the example scenario and adapt it to your setup:

```bash
cp scenarios/example.yaml scenarios/my-bench.yaml
# Edit URLs, model name, and API key
```

**1. Check connectivity**

```bash
llm-grill ping scenarios/my-bench.yaml
```

**2. Run a benchmark**

```bash
llm-grill run scenarios/my-bench.yaml --output results.jsonl
```

After the run, tables are printed automatically:

- **Benchmark Summary** — latency, throughput, success rate per server/model
- **Conversation Quality Metrics** — KV cache hit rate, turn-to-turn latency ratio, context growth factor
- **Load Ramp Results** — one row per (server, model, concurrency level), shown only when `ramp_levels` is set

**3. Generate a report from an existing results file**

```bash
# Terminal table (summary + conversation metrics)
llm-grill report results.jsonl

# JSON (both sections, pipeable)
llm-grill report results.jsonl --format json

# CSV (raw requests, pandas-ready)
llm-grill report results.jsonl --format csv --output summary.csv

# Hide conversation metrics table
llm-grill report results.jsonl --no-conversations
```

---

## Commands

| Command | Description |
|---|---|
| `llm-grill run <scenario>` | Run a benchmark, stream results to JSONL |
| `llm-grill ping <scenario>` | Test server connectivity |
| `llm-grill show-scenario <scenario>` | Validate and display a scenario |
| `llm-grill report <results.jsonl>` | Generate a report from a results file |

### `run` options

| Option | Default | Description |
|---|---|---|
| `--output / -o` | `results-<name>.jsonl` | Output file path |
| `--format / -f` | `jsonl` | `jsonl` or `csv` |
| `--quiet / -q` | off | Suppress progress and tables |

### `report` options

| Option | Default | Description |
|---|---|---|
| `--format / -f` | `table` | `table`, `json`, or `csv` |
| `--output / -o` | — | Output path for CSV format |
| `--no-conversations` | off | Hide the conversation metrics table |

### Global options

| Option | Description |
|---|---|
| `--verbose / -v` | Enable debug logging |
| `--version / -V` | Print version and exit |

---

## Supported backends

| Backend | Type | Metrics source | Notes |
|---|---|---|---|
| [vLLM](https://github.com/vllm-project/vllm) | `vllm` | Prometheus `/metrics` | KV cache usage |
| [SGLang](https://github.com/sgl-project/sglang) | `sglang` | Prometheus `/metrics` | Cache hit rate |
| [llama.cpp](https://github.com/ggerganov/llama.cpp) | `llamacpp` | `/health` endpoint | GGUF models |
| [LiteLLM](https://github.com/BerriAI/litellm) | `litellm` | Gateway routing | Proxy for multiple backends |
| OpenAI-compatible | `openai` | — | Reuses vLLM client |

---

## Scenario format (YAML)

```yaml
name: my-scenario
description: Optional description

backends:
  - name: gpu-vllm
    url: http://gpu-vllm:8000
    api_key: none                    # "none", a literal key, or ${ENV_VAR}
    type: vllm                       # vllm | sglang | llamacpp | litellm | openai
    timeout: 120.0

models:
  - name: devstral-small-2-24b
    max_tokens: 512
    temperature: 0.0

conversations:
  - name: multi-turn-debug
    turns:
      - role: system
        content: "You are an expert developer."
      - role: user
        content: "My FastAPI app returns 500 errors under load. What should I check?"
      - role: user
        content: "The DB connection pool is exhausted. How do I configure it in SQLAlchemy?"

targets:
  - backend: gpu-vllm
    model: devstral-small-2-24b
    conversation: multi-turn-debug

load:
  concurrent_users: 10
  iterations: 3
  ramp_up_seconds: 5.0
  think_time_seconds: 0.0
```

Each `role: user` turn triggers an inference request. Conversation history (including assistant responses) is carried forward, so the server sees a growing context.
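
A minimal sketch of that accumulation (illustrative only, not llm-grill's client code; `send_chat` is a hypothetical stub standing in for the real streaming request):

```python
def send_chat(messages: list[dict]) -> str:
    """Hypothetical stand-in for the real streaming request."""
    return f"(assistant reply given {len(messages)} prior messages)"

turns = [
    {"role": "system", "content": "You are an expert developer."},
    {"role": "user", "content": "My FastAPI app returns 500 errors under load."},
    {"role": "user", "content": "The DB connection pool is exhausted."},
]

history: list[dict] = []
for turn in turns:
    history.append(turn)
    if turn["role"] != "user":
        continue  # only user turns trigger an inference request
    reply = send_chat(history)  # the server sees the full, growing context
    history.append({"role": "assistant", "content": reply})
```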

### Load ramp

Add `ramp_levels` to sweep concurrency levels in a single run. When set, `concurrent_users` is ignored.

```yaml
load:
  iterations: 3
  ramp_levels: [1, 5, 10, 20, 50, 100]
  ramp_pause_seconds: 10.0   # pause between levels, default 10 s
  think_time_seconds: 0.0
```

Results are tagged with `concurrent_users_level` in the JSONL output and displayed in a **Load Ramp Results** table sorted by `(server, model, users)`.
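
For example, a breaking point can be read off the JSONL with pandas (a sketch; field names are those documented under Output format below):

```python
import pandas as pd

df = pd.read_json("results.jsonl", lines=True)
ramp = (
    df.groupby(["target_server", "target_model", "concurrent_users_level"])
      .agg(p95_ttft_s=("ttft_s", lambda s: s.quantile(0.95)),
           success_rate=("success", "mean"))
      .reset_index()
)
# The first level where success_rate drops or p95 TTFT spikes is the breaking point.
print(ramp.sort_values(["target_server", "target_model", "concurrent_users_level"]))
```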

---

## Metrics

### Latency & throughput

| Metric | Description |
|---|---|
| **TTFT** | Time to First Token — from request sent to first token received (client-side, includes network) |
| **TPOT** | Time Per Output Token — `(E2E - TTFT) / max(completion_tokens - 1, 1)` |
| **E2E latency** | Total time from request to last token |
| **tokens/s** | Per request: `completion_tokens / E2E latency`; aggregate: total completion tokens across all requests / benchmark duration |
| **success rate** | % of requests completed without error |

```
t0       → request sent
t_first  → first non-empty content chunk received
t_last   → stream ends ([DONE] or connection close)

TTFT   = t_first - t0
E2E    = t_last  - t0
TPOT   = (E2E - TTFT) / max(completion_tokens - 1, 1)
```

All timings are measured client-side and include the network round-trip. For fair cross-server comparisons, run the benchmark from the same network location.
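
A minimal sketch of capturing these timestamps client-side with `httpx` streaming (illustrative only, not llm-grill's internals; the URL, model name, and payload are placeholders):

```python
import time
import httpx

payload = {
    "model": "my-model",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,
    "max_tokens": 64,
}

t0 = time.perf_counter()
t_first = None
with httpx.stream(
    "POST", "http://localhost:8000/v1/chat/completions", json=payload, timeout=120.0
) as resp:
    for chunk in resp.iter_bytes():
        # A real implementation parses SSE chunks and waits for the first
        # non-empty content delta; first bytes are a close approximation here.
        if t_first is None and chunk:
            t_first = time.perf_counter()
t_last = time.perf_counter()

completion_tokens = 64  # in practice, read from the stream's final usage chunk
ttft = t_first - t0
e2e = t_last - t0
tpot = (e2e - ttft) / max(completion_tokens - 1, 1)
print(f"TTFT={ttft:.3f}s  E2E={e2e:.3f}s  TPOT={tpot:.4f}s/token")
```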

### Conversation quality (multi-turn)

Computed per `(server, model, conversation)` group:

| Metric | Description | Interpretation |
|---|---|---|
| **Turn-to-Turn Ratio** | `mean(TTFT turn > 0) / mean(TTFT turn 0)` | < 1 → KV cache reduces prefill time |
| **Context Growth Factor** | `mean(E2E last turn) / mean(E2E first turn)` | > 1 → latency increases with context |
| **KV Cache Hit Rate** | Fraction of prompt tokens served from the prefix cache | SGLang only (Prometheus) |
| **KV Cache Usage** | GPU KV cache capacity used | vLLM only (Prometheus) |
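
To illustrate, the two ratio metrics can be recomputed from the JSONL output with pandas (a sketch, assuming `turn` is 0-indexed as in the formulas above):

```python
import pandas as pd

df = pd.read_json("results.jsonl", lines=True)
ok = df[df["success"]]

for (server, model, conv), g in ok.groupby(
    ["target_server", "target_model", "conversation"]
):
    first, later = g[g["turn"] == 0], g[g["turn"] > 0]
    t2t = later["ttft_s"].mean() / first["ttft_s"].mean()
    last = g[g["turn"] == g["turn"].max()]
    growth = last["e2e_latency_s"].mean() / first["e2e_latency_s"].mean()
    print(f"{server}/{model}/{conv}: turn-to-turn={t2t:.2f}, context-growth={growth:.2f}")
```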

### GPU monitoring

Enable per-backend GPU metrics (utilization, memory, temperature, power) collected via SSH:

```yaml
backends:
  - name: gpu-vllm
    url: http://gpu-vllm:8000
    type: vllm
    gpu_monitoring: true
    ssh_host: gpu-vllm       # defaults to URL host if omitted
    ssh_user: root            # default
```

Requires `nvidia-smi` on the target host and SSH key-based access.
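
Conceptually this amounts to polling `nvidia-smi` over SSH. A rough single-sample equivalent (illustrative, not the exact command llm-grill issues; host and user match the config above):

```python
import subprocess

# One GPU sample over SSH; the query fields mirror the metrics listed above.
cmd = [
    "ssh", "root@gpu-vllm",
    "nvidia-smi",
    "--query-gpu=utilization.gpu,memory.used,temperature.gpu,power.draw",
    "--format=csv,noheader,nounits",
]
out = subprocess.run(cmd, capture_output=True, text=True, check=True).stdout
util, mem_mib, temp_c, power_w = (v.strip() for v in out.splitlines()[0].split(","))
print(f"util={util}% mem={mem_mib}MiB temp={temp_c}C power={power_w}W")
```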

---

## Output format (JSONL)

One JSON object per request, written incrementally:

```json
{
  "scenario": "my-scenario",
  "target_server": "gpu-vllm",
  "target_model": "devstral-small-2-24b",
  "conversation": "multi-turn-debug",
  "turn": 1,
  "iteration": 0,
  "user_id": 3,
  "timestamp_start": "2026-03-10T14:00:00+00:00",
  "ttft_s": 0.142,
  "tpot_s": 0.018,
  "e2e_latency_s": 1.23,
  "prompt_tokens": 45,
  "completion_tokens": 64,
  "tokens_per_second": 52.0,
  "success": true,
  "error": null,
  "kv_cache_usage": 0.34,
  "requests_running": 8.0,
  "concurrent_users_level": 10
}
```

The file is valid even if the benchmark is interrupted — each line is a complete record.

**Read with pandas:**

```python
import pandas as pd

df = pd.read_json("results.jsonl", lines=True)
df.groupby("target_server")[["ttft_s", "e2e_latency_s", "tokens_per_second"]].mean()
```

**Read with polars:**

```python
import polars as pl

df = pl.read_ndjson("results.jsonl")
df.group_by("target_server").agg(pl.col("ttft_s").mean())
```

---

## API keys

Use `${ENV_VAR}` syntax to read from environment variables at load time:

```yaml
backends:
  - name: gateway
    url: http://my-litellm-proxy:4000
    api_key: ${LITELLM_API_KEY}
    type: litellm
```

```bash
export LITELLM_API_KEY="sk-..."
llm-grill run scenarios/my-scenario.yaml
```

Never commit literal API keys in scenario files.
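
For reference, the expansion amounts to something like this sketch (illustrative; not necessarily the exact implementation):

```python
import os
import re

def expand_env(value: str) -> str:
    """Replace ${VAR} references with the value of the environment variable."""
    def sub(match: re.Match) -> str:
        var = match.group(1)
        if var not in os.environ:
            raise KeyError(f"environment variable {var} is not set")
        return os.environ[var]
    return re.sub(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}", sub, value)

# expand_env("${LITELLM_API_KEY}") -> "sk-..." once the variable is exported
```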

---

## LiteLLM gateway routing

When backends are behind a LiteLLM proxy, define one backend entry for the gateway and use **model aliases** to route:

```yaml
backends:
  - name: gateway
    url: http://my-litellm-proxy:4000
    api_key: ${LITELLM_API_KEY}
    type: litellm

models:
  - name: devstral-small-llama    # LiteLLM alias → llama.cpp
    max_tokens: 512
  - name: devstral-small-vllm     # LiteLLM alias → vLLM
    max_tokens: 512

targets:
  - backend: gateway
    model: devstral-small-llama
    conversation: short-code-question
  - backend: gateway
    model: devstral-small-vllm
    conversation: short-code-question
```

Aliases must match `model_name` values in LiteLLM's `config.yaml`.
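
For reference, matching entries on the LiteLLM side might look like the following sketch (hostnames and upstream model names are assumptions; adjust to your deployment):

```yaml
model_list:
  - model_name: devstral-small-vllm        # alias used in the scenario above
    litellm_params:
      model: openai/devstral-small-2-24b   # upstream vLLM, OpenAI-compatible
      api_base: http://gpu-vllm:8000/v1
  - model_name: devstral-small-llama       # alias routed to a llama.cpp server
    litellm_params:
      model: openai/devstral-small-2-24b
      api_base: http://llamacpp:8080/v1
```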

---

## Troubleshooting

| Problem | Fix |
|---|---|
| `ModuleNotFoundError: llm_grill` | Run `make install` |
| `ValidationError` on scenario load | Run `llm-grill show-scenario file.yaml` for details |
| TTFT always < 1 ms | Server not streaming — check `stream: true` support |
| All requests `connection refused` | Run `llm-grill ping file.yaml` — check URL/port |
| `401 Unauthorized` | Set `api_key: ${MY_VAR}` and export the variable |
| `ping` times out on LiteLLM | LiteLLM's `/health` performs a live inference call — raise the backend `timeout` or verify the gateway URL |

---

## Contributing

See [CONTRIBUTING.md](CONTRIBUTING.md).

## License

Apache 2.0 — see [LICENSE](LICENSE).
