Metadata-Version: 2.4
Name: llm-prompt-lab
Version: 0.1.0
Summary: Test prompt variants across LLM providers with LLM-as-judge evaluation
License-Expression: MIT
License-File: LICENSE
Requires-Python: >=3.11
Requires-Dist: anthropic>=0.18.0
Requires-Dist: jinja2>=3.1.0
Requires-Dist: openai>=1.0.0
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: python-frontmatter>=1.0.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: rich>=13.0.0
Requires-Dist: typer>=0.9.0
Provides-Extra: dev
Requires-Dist: pytest-asyncio>=0.23.0; extra == 'dev'
Requires-Dist: pytest>=8.0.0; extra == 'dev'
Description-Content-Type: text/markdown

# prompt-lab

Test prompt variants across LLM providers with LLM-as-judge evaluation.

## Installation

```bash
uv sync
```

Create `.env` with your API keys:

```
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
```

## Quick Start

```bash
# Create a new experiment
prompt-lab new
prompt-lab new --config spec.yaml

# Run an experiment
prompt-lab run experiments/my-experiment

# Run a single variant
prompt-lab run experiments/my-experiment/v1

# View results
prompt-lab results experiments/my-experiment/v1

# Compare variants
prompt-lab compare experiments/my-experiment
```

## How It Works

```
system.md (optional) + prompt.md + inputs.yaml → LLM → response → judge.md → score
```

1. **prompt.md** is the user message sent to the LLM (the prompt you want to evaluate)
2. **system.md** (optional) is the system message — persona, instructions, constraints
3. **inputs.yaml** provides test cases with variables for both files
4. The messages are sent to each configured **model** (LLM)
5. **judge.md** evaluates each response and assigns a score

Create multiple variants (v1, v2, etc.) to compare different prompt approaches.

## Experiment Structure

```
my-experiment/
├── experiment.md       # Config: models, runs (required)
├── judge.md            # Evaluator: scoring criteria (required)
├── inputs.yaml         # Shared test cases (optional, used by all variants)
├── v1/                 # Variant (at least one required)
│   ├── prompt.md       # User message (required)
│   ├── system.md       # System message (optional)
│   └── tools.yaml      # Tool definitions (optional)
└── v2/                 # Another variant to compare...
    ├── prompt.md       # Different prompt approach
    └── system.md       # Different system instructions
```

Both `judge.md` and `inputs.yaml` support fallback: if not found in the variant folder, the experiment-level file is used. This allows sharing test cases across variants for fair A/B comparison.

## File Formats

### experiment.md

Defines the experiment name, description, models, and default number of runs per input.

```yaml
---
name: my-experiment
description: Testing different prompt styles
hypothesis: Concise prompts will score higher than verbose ones
models:
  - openai:gpt-4o-mini
  - anthropic:claude-sonnet-4-20250514
runs: 5
---

Optional markdown content describing the experiment.
```

**Experiment options:**

| Option | Default | Description |
|--------|---------|-------------|
| `name` | folder name | Experiment identifier |
| `description` | `""` | Brief description |
| `models` | required | List of models to test |
| `runs` | `5` | Runs per input (for statistical analysis) |
| `hypothesis` | `""` | What you're testing (displayed in results) |

### prompt.md (user message)

The user message sent to the LLM. Use `{{ variables }}` to inject test data from `inputs.yaml`.

```markdown
Generate 5 creative product names.

Product description: {{ description }}
Seed words: {{ seeds }}

Product names:
```

Each variant folder contains a different `prompt.md` to compare approaches (e.g., zero-shot vs. few-shot, or a formal vs. casual tone).

For hardcoded prompts without variables, just write the prompt directly:

```markdown
Tell me a joke about programming.
```

### system.md (optional system message)

If present, becomes the system message for the LLM call. Uses the same `{{ variables }}` from `inputs.yaml`.

```markdown
You are a helpful assistant. You can check the weather using the get_weather tool when users ask about weather conditions.

Only use the weather tool when the user is actually asking about weather. For other questions, just respond normally without using any tools.
```

When `system.md` is absent, the prompt is sent as the user message with no system message.

**When to use system.md:**

- Setting a persona or role for the LLM
- Providing instructions that frame behavior (e.g., tool usage rules)
- Separating "what the model is" from "what the user asks"

### inputs.yaml (optional)

Test cases with variables matching the prompt and system templates. If omitted, each prompt runs once with no variables (useful for static prompts).

```yaml
- id: alice
  name: Alice

- id: bob
  name: Bob
  runs: 10  # Override experiment's runs for this input
```

Each input case can have any number of fields. All fields (except `id` and `runs`) are available as `{{ variables }}` in both `prompt.md` and `system.md`.

**Location:** Can be placed at experiment level (shared across all variants) or in a variant folder (variant-specific). Variant-level inputs take precedence over experiment-level.

**Input options:**

| Field | Default | Description |
|-------|---------|-------------|
| `id` | `input-N` | Unique identifier for results |
| `runs` | experiment runs | Override runs for this specific input |
| (other) | - | Variables available in prompt and system templates |
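
The rules in the table above can be sketched as a small helper. The name `split_input_case` is hypothetical, not prompt-lab's actual code:

```python
def split_input_case(case: dict, default_runs: int, index: int = 1):
    """Separate the reserved `id`/`runs` fields from template variables."""
    case_id = case.get("id", f"input-{index}")          # default: input-N
    runs = case.get("runs", default_runs)               # per-input override
    variables = {k: v for k, v in case.items() if k not in ("id", "runs")}
    return case_id, runs, variables
```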

### tools.yaml (optional)

Define tools (functions) that the LLM can call during execution. Useful for testing prompts that involve function calling.

```yaml
- name: get_weather
  description: Get current weather for a location
  parameters:
    type: object
    properties:
      location:
        type: string
        description: City name
      unit:
        type: string
        enum: [celsius, fahrenheit]
    required:
      - location

- name: search
  description: Search the web
  parameters:
    type: object
    properties:
      query:
        type: string
```

**Tool fields:**

| Field | Required | Description |
|-------|----------|-------------|
| `name` | yes | Tool/function name |
| `description` | no | What the tool does |
| `parameters` | no | JSON Schema for tool parameters |

Tool calls made by the model are captured in the response and available for judge evaluation.
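
Each entry maps onto the provider's tool format. As a sketch, translating one entry to OpenAI's chat-completions shape could look like this (the assumption is that prompt-lab performs a per-provider translation along these lines; Anthropic's tool format differs slightly):

```python
def to_openai_tool(tool: dict) -> dict:
    """Map one tools.yaml entry to OpenAI's chat-completions tool shape."""
    return {
        "type": "function",
        "function": {
            "name": tool["name"],
            "description": tool.get("description", ""),
            # An empty object schema stands in when no parameters are given.
            "parameters": tool.get("parameters", {"type": "object", "properties": {}}),
        },
    }
```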

### judge.md (evaluator)

Defines how to score each LLM response. The judge is another LLM that evaluates quality based on your criteria.

```yaml
---
model: openai:gpt-4o-mini
score_range: [1, 10]
temperature: 0
---

You are evaluating a greeting response.

## Rubric
- **10**: Uses user's name, warm tone, offers to help
- **8-9**: Uses name and friendly, but generic
- **6-7**: Friendly but doesn't use name
- **4-5**: Cold or overly formal
- **1-3**: Inappropriate or ignores user

**Prompt:** {{ prompt }}
**Model Response:** {{ response }}
```

**Judge options:**

| Option | Default | Description |
|--------|---------|-------------|
| `model` | `openai:gpt-4o` | Model to use for judging (single judge) |
| `models` | - | List of models for multi-judge (opt-in, see below) |
| `aggregation` | `mean` | Score aggregation: `mean` or `median` (multi-judge only) |
| `score_range` | `[1, 10]` | Min and max score |
| `temperature` | `0` | 0 = deterministic, higher = more varied |
| `chain_of_thought` | `true` | Step-by-step reasoning before scoring (disable with `false`) |

### Multi-Judge Evaluation (Opt-in)

Use multiple LLM models as judges to reduce self-enhancement bias (a model scoring its own outputs favorably). Scores are aggregated using the mean or median.

```yaml
---
models:
  - openai:gpt-4o-mini
  - anthropic:claude-sonnet-4-20250514
aggregation: mean
score_range: [1, 10]
---

## Rubric
...
```

**When to use multi-judge:**
- Testing responses from GPT models? Add Claude as a judge (and vice versa)
- Need more reliable scores? Multiple perspectives reduce bias
- High-stakes evaluations where accuracy matters

**Trade-offs:**
- Requires API keys for multiple providers
- API costs scale with the number of judges (two judges ≈ 2× the judging cost)
- Slightly slower execution

**Note:** Use `model:` (singular) for single judge, `models:` (plural) for multi-judge.
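
Aggregation itself is straightforward; a sketch mirroring the `aggregation` option:

```python
import statistics

def aggregate_scores(scores: list[float], method: str = "mean") -> float:
    """Combine per-judge scores into one. Median is more robust to a
    single outlier judge; mean weights every judge equally."""
    if method == "median":
        return statistics.median(scores)
    return statistics.mean(scores)
```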

### Chain-of-Thought Evaluation

By default, the judge analyzes responses step-by-step before scoring. This improves alignment with human judgment by reducing anchoring bias.

To disable Chain-of-Thought (for faster/cheaper evaluations):

```yaml
---
model: openai:gpt-4o-mini
score_range: [1, 10]
chain_of_thought: false
---

## Rubric
...
```

When enabled, the judge will:
1. Review each rubric criterion
2. Analyze how the response meets each criterion
3. Identify strengths and weaknesses
4. Only then provide the final score
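
Conceptually, the flag toggles a reasoning preamble around your rubric. A hypothetical sketch — prompt-lab's real wording is not shown here:

```python
def build_judge_prompt(rubric: str, chain_of_thought: bool = True) -> str:
    """Illustrative: wrap the rubric with (or without) a CoT preamble."""
    if not chain_of_thought:
        return rubric + "\n\nRespond with only the final score."
    return rubric + (
        "\n\nBefore scoring: review each rubric criterion, analyze how the "
        "response meets it, note strengths and weaknesses, then give the "
        "final score."
    )
```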

## Multiple Runs & Statistics

For more reliable evaluation, run each input multiple times and review the statistical summary:

```yaml
# experiment.md
---
name: my-experiment
models:
  - openai:gpt-4o-mini
runs: 5
---
```

Results show hypothesis and mean with 95% confidence interval:

```
Hypothesis: Concise prompts will score higher than verbose ones

┏━━━━━━━┳━━━━━━━━━━━━━━━━━━━━┳━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━┳━━━━━━━━━━━━━━━┓
┃ Input ┃ Model              ┃ Mean ┃ 95% CI       ┃ Range ┃ Scores        ┃
┡━━━━━━━╇━━━━━━━━━━━━━━━━━━━━╇━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━╇━━━━━━━━━━━━━━━┩
│ alice │ openai:gpt-4o-mini │ 9.2  │ (8.5-9.9)    │ 8-10  │ 9, 10, 9, 9, 9│
│ bob   │ openai:gpt-4o-mini │ 8.4  │ (7.8-9.0)    │ 8-9   │ 8, 9, 8, 8, 9 │
└───────┴────────────────────┴──────┴──────────────┴───────┴───────────────┘

```

When `runs > 1`:
- Cache is disabled to get independent LLM responses
- Each input is evaluated N times
- 95% confidence intervals show the reliability of your results
- Warning shown when sample size is too small for reliable statistics
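
The interval can be reproduced from the raw scores. A minimal sketch using a t-based confidence interval (an assumption — prompt-lab's exact method is not documented here):

```python
import math
import statistics

# Two-sided 95% t critical values, keyed by degrees of freedom (n - 1).
T_CRIT = {2: 4.303, 3: 3.182, 4: 2.776, 5: 2.571, 9: 2.262}

def mean_ci_95(scores: list[float]) -> tuple[float, float, float]:
    """Return (mean, lower, upper) for a 95% CI on the mean score."""
    n = len(scores)
    mean = statistics.mean(scores)
    half = T_CRIT[n - 1] * statistics.stdev(scores) / math.sqrt(n)
    return mean, mean - half, mean + half
```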

## Statistical Significance

When comparing variants, the `compare` command shows whether differences are statistically significant:

```bash
prompt-lab compare experiments/my-experiment
```

```
Hypothesis: Concise prompts will score higher than verbose ones

┏━━━━━━━━━┳━━━━━━━━━━━━┳━━━━━━━━━━━━━━┳━━━━━━━━━━━━━┳━━━━━━┓
┃ Variant ┃ Mean Score ┃ 95% CI       ┃ Avg Latency ┃ Runs ┃
┡━━━━━━━━━╇━━━━━━━━━━━━╇━━━━━━━━━━━━━━╇━━━━━━━━━━━━━╇━━━━━━┩
│ v1      │ 8.5/10     │ (8.1-8.9)    │ 450ms       │ 2×5  │
│ v2      │ 7.2/10     │ (6.8-7.6)    │ 420ms       │ 2×5  │
└─────────┴────────────┴──────────────┴─────────────┴──────┘

Statistical Significance (Welch's t-test, α=0.05):

  ✓ v1 > v2 (p≤0.01)
```

This helps you know if v1 is actually better than v2, or if the difference is just noise.
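
The statistic behind that verdict can be sketched with the standard library. The p-value would then come from a t-distribution CDF (e.g. `scipy.stats.t.sf`); whether prompt-lab computes it the same way internally is an assumption:

```python
import math
import statistics

def welch_t(a: list[float], b: list[float]) -> tuple[float, float]:
    """Welch's t statistic and degrees of freedom for two score samples."""
    sa = statistics.variance(a) / len(a)  # squared standard error of a
    sb = statistics.variance(b) / len(b)  # squared standard error of b
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(sa + sb)
    # Welch–Satterthwaite approximation for the degrees of freedom
    df = (sa + sb) ** 2 / (sa**2 / (len(a) - 1) + sb**2 / (len(b) - 1))
    return t, df
```

Welch's test (unlike Student's t) does not assume the two variants have equal score variance, which suits comparing different prompts.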

## Templating

Variables from `inputs.yaml` are available in both `prompt.md` and `system.md`:

```markdown
Hello {{ name }}, you are {{ age }} years old.
```

Literal braces don't need escaping:

```markdown
Return JSON: {"result": "value"}
```
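
prompt-lab renders templates with Jinja2; as a rough stand-in, the substitution behaves like this (illustrative only, not the actual renderer):

```python
import re

def render(template: str, variables: dict) -> str:
    """Replace {{ name }} placeholders; leave everything else, including
    literal JSON braces, untouched."""
    return re.sub(
        r"\{\{\s*(\w+)\s*\}\}",
        lambda m: str(variables.get(m.group(1), m.group(0))),
        template,
    )
```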

## CLI Commands

### new

Create a new experiment. Runs an interactive wizard or reads from a config file.

```bash
# Interactive wizard
prompt-lab new

# From config file
prompt-lab new --config spec.yaml
```

**Options:**

| Option | Short | Description |
|--------|-------|-------------|
| `--config` | `-c` | Path to experiment spec YAML |

### run

Run a prompt experiment. Auto-detects scope from path.

```bash
# Run all variants
prompt-lab run experiments/my-experiment

# Run single variant
prompt-lab run experiments/my-experiment/v1

# Run specific model only
prompt-lab run experiments/my-experiment/v1 --model openai:gpt-4o-mini

# Skip cache (fresh API calls)
prompt-lab run experiments/my-experiment/v1 --no-cache

# Hide progress bar
prompt-lab run experiments/my-experiment -q
```

**Options:**

| Option | Short | Description |
|--------|-------|-------------|
| `--model` | `-m` | Run only this model |
| `--no-cache` | | Disable response caching |
| `--quiet` | `-q` | Hide progress bar |

### results

Show results table for a variant.

```bash
prompt-lab results experiments/my-experiment/v1

# Show specific run
prompt-lab results experiments/my-experiment/v1 --run 2026-01-25T19-30-00
```

### compare

Compare results across all variants.

```bash
prompt-lab compare experiments/my-experiment
```

### show

Show detailed responses with judge reasoning.

```bash
# Show all responses
prompt-lab show experiments/my-experiment/v1

# Filter by input
prompt-lab show experiments/my-experiment/v1 --input alice

# Filter by model
prompt-lab show experiments/my-experiment/v1 --model openai:gpt-4o-mini

# Combine filters
prompt-lab show experiments/my-experiment/v1 --input alice --model openai:gpt-4o-mini
```

### clean

Clean experiment results. Auto-detects scope from path.

```bash
# Clean single variant results
prompt-lab clean experiments/my-experiment/v1

# Clean all variants (auto-detected from experiment path)
prompt-lab clean experiments/my-experiment

# Skip confirmation
prompt-lab clean experiments/my-experiment --yes
```

**Options:**

| Option | Short | Description |
|--------|-------|-------------|
| `--yes` | `-y` | Skip confirmation prompt |

### cache

Manage response cache.

```bash
# Clear all cached responses
prompt-lab cache clear
```

## Supported Providers

| Provider | Model format |
|----------|--------------|
| OpenAI | `openai:gpt-4o`, `openai:gpt-4o-mini` |
| Anthropic | `anthropic:claude-sonnet-4-20250514` |

## References

LLM-as-judge evaluation methodology and best practices:

- [Evidently AI - LLM-as-a-Judge Complete Guide](https://www.evidentlyai.com/llm-guide/llm-as-a-judge)
- [Sebastian Raschka - Understanding the 4 Main Approaches to LLM Evaluation](https://magazine.sebastianraschka.com/p/llm-evaluation-4-approaches)
- [Eugene Yan - Evaluating the Effectiveness of LLM-Evaluators](https://eugeneyan.com/writing/llm-evaluators/)
- [Monte Carlo - LLM-as-Judge: 7 Best Practices](https://www.montecarlodata.com/blog-llm-as-judge/)
- [Arize AI - Evidence-Based Prompting Strategies for LLM-as-a-Judge](https://arize.com/blog/evidence-based-prompting-strategies-for-llm-as-a-judge-explanations-and-chain-of-thought/)
- [A Survey on LLM-as-a-Judge (2024)](https://arxiv.org/abs/2411.15594)
- [Justice or Prejudice? Quantifying Biases in LLM-as-a-Judge](https://llm-judge-bias.github.io/)
