Metadata-Version: 2.4
Name: llmfaker
Version: 0.1.0
Summary: In-process LLM client faker that patches OpenAI, Anthropic, LiteLLM, and LangChain for fast, deterministic testing without network calls
Author-email: Raja CSP <raja.csp@gmail.com>
License-Expression: Apache-2.0
Project-URL: Homepage, https://github.com/rajacsp/llmfaker
Project-URL: Repository, https://github.com/rajacsp/llmfaker
Keywords: mock,llm,openai,anthropic,testing
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Topic :: Software Development :: Testing
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: fastapi>=0.68.0
Requires-Dist: uvicorn>=0.15.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: python-json-logger>=2.0.0
Requires-Dist: pyyaml>=5.4.1
Requires-Dist: watchdog>=2.1.0
Requires-Dist: httpx>=0.28.1
Requires-Dist: tiktoken>=0.9.0
Requires-Dist: click>=8.1.0
Requires-Dist: requests>=2.32.3
Dynamic: license-file

# llmfaker

[![PyPI version](https://badge.fury.io/py/llmfaker.svg)](https://pypi.org/project/llmfaker/)
[![License](https://img.shields.io/badge/License-Apache_2.0-blue.svg)](LICENSE)

A mock server and in-process faker for the OpenAI and Anthropic APIs, built for Python testing. It monkey-patches the official LLM client libraries to intercept calls in-process, so tests run fast and deterministically with no network overhead.
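
The interception works by swapping a client attribute for a fake object that records calls and returns canned response objects. Here is a minimal, self-contained sketch of that general technique (illustrative only, not llmfaker's actual internals):

```python
from types import SimpleNamespace

class FakeCompletions:
    """Stand-in that records every call and returns a canned reply."""

    def __init__(self, reply):
        self.reply = reply
        self.calls = []

    def create(self, **kwargs):
        self.calls.append(kwargs)  # record arguments for later assertions
        message = SimpleNamespace(content=self.reply)
        return SimpleNamespace(choices=[SimpleNamespace(message=message)])

class RealClient:
    completions = None  # in the real library, this attribute does network I/O

# Patching = swapping the real attribute for the fake, then restoring it on exit.
client = RealClient()
original = client.completions
fake = FakeCompletions("It's sunny!")
client.completions = fake
try:
    response = client.completions.create(model="gpt-4", messages=[])
    print(response.choices[0].message.content)  # It's sunny!
finally:
    client.completions = original  # context-manager exit restores the real client
```

Because the swap happens on the client object itself, application code under test needs no changes at all.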

## Features

- In-process patching of `openai`, `anthropic`, `litellm`, and `langchain` clients
- Fluent builder API for configuring responses
- Request matching: exact, regex, and predicate-based, plus template-rendered responses
- Streaming support with realistic SSE emission (OpenAI and Anthropic formats)
- Failure injection: rate limits, timeouts, mid-stream disconnects, malformed JSON
- Latency simulation with configurable time-to-first-token (TTFT) and inter-token delays
- Record/replay cassettes for integration testing
- Multi-turn conversation scripting and tool-call sequences
- Token counting and cost estimation via pricing tables
- Pytest plugin with `llm_faker` and `llm_recording` fixtures
- Standalone mock server mode via CLI
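
Cost estimation from a pricing table reduces to a per-token lookup; a self-contained sketch, with illustrative prices (the numbers and table below are examples, not real rates and not llmfaker's actual pricing data):

```python
# Illustrative pricing table: USD per 1M tokens (example values only).
PRICING = {
    "gpt-4": {"input": 30.00, "output": 60.00},
    "claude-3-opus": {"input": 15.00, "output": 75.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the dollar cost of one call from token counts and a per-model table."""
    rates = PRICING[model]
    return (input_tokens * rates["input"] + output_tokens * rates["output"]) / 1_000_000

cost = estimate_cost("gpt-4", input_tokens=1_000, output_tokens=500)
print(f"${cost:.4f}")  # $0.0600
```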

## Installation

```bash
pip install llmfaker
```

## Quick Start

### In-process (for unit tests)

```python
from llmfaker import LLMFaker
import openai

client = openai.OpenAI(api_key="fake")

with LLMFaker() as faker:
    faker.when(prompt_contains="weather").respond("It's sunny!")
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "What's the weather?"}],
    )
    print(response.choices[0].message.content)  # "It's sunny!"
```
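
Under the hood, matchers like `prompt_contains` reduce to predicates tried in order. A self-contained sketch of first-match rule resolution (the helper names and semantics here are assumptions for illustration, not llmfaker's API):

```python
import re

# A rule pairs a matcher predicate with a canned response (simplified sketch).
def contains(substring):
    return lambda prompt: substring in prompt

def matches(pattern):
    return lambda prompt: re.search(pattern, prompt) is not None

rules = [
    (contains("weather"), "It's sunny!"),
    (matches(r"\bjoke\b"), "Too many bugs!"),
    (lambda prompt: len(prompt) > 500, "That's a long prompt."),  # predicate-based
]

def resolve(prompt, default="I don't know."):
    # First matching rule wins, mirroring a first-match lookup.
    for matcher, response in rules:
        if matcher(prompt):
            return response
    return default

print(resolve("What's the weather?"))  # It's sunny!
print(resolve("tell me a joke"))       # Too many bugs!
print(resolve("hello"))                # I don't know.
```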

### Pytest plugin

```python
def test_my_feature(llm_faker):
    llm_faker.when(prompt_contains="hello").respond("Hi!")
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "hello"}],
    )
    assert "Hi!" in response.choices[0].message.content
    assert len(llm_faker.calls) == 1
```

### Cassette record/replay

```python
def test_real_api_behavior(llm_recording):
    # First run: calls real API and records to cassette
    # Subsequent runs: replays from cassette file
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": "hello"}],
    )
    assert response.choices[0].message.content
```
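
The cassette mechanism can be sketched as a request-keyed store: record on a miss, replay on a hit. A simplified in-memory version follows (real cassettes persist to a file; the `Cassette`/`fetch` names are hypothetical, chosen for illustration):

```python
import hashlib
import json

class Cassette:
    """Minimal record/replay store keyed by a hash of the request."""

    def __init__(self):
        self.tape = {}

    @staticmethod
    def _key(request: dict) -> str:
        # Canonical JSON makes the key stable across dict orderings.
        return hashlib.sha256(json.dumps(request, sort_keys=True).encode()).hexdigest()

    def fetch(self, request: dict, live_call):
        key = self._key(request)
        if key not in self.tape:          # first run: hit the "real" API and record
            self.tape[key] = live_call(request)
        return self.tape[key]             # later runs: replay without calling out

calls = []
def fake_live_call(request):
    calls.append(request)
    return {"content": "recorded reply"}

cassette = Cassette()
req = {"model": "gpt-4", "messages": [{"role": "user", "content": "hello"}]}
first = cassette.fetch(req, fake_live_call)
second = cassette.fetch(req, fake_live_call)
print(first == second, len(calls))  # True 1
```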

### Failure injection

```python
with LLMFaker() as faker:
    with faker.fail(rate=1.0, status=429, retry_after=30):
        # All calls will get a 429 rate limit error
        ...
```
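
Probabilistic failure injection amounts to rolling a die before each response; a sketch assuming a rate-limit error that carries a `retry_after` value (the class and function names are illustrative, not llmfaker's):

```python
import random

class RateLimitError(Exception):
    """Stand-in for a client library's 429 error (illustrative)."""

    def __init__(self, retry_after):
        super().__init__(f"429 Too Many Requests (retry after {retry_after}s)")
        self.retry_after = retry_after

def maybe_fail(rate: float, retry_after: int = 30):
    # With probability `rate`, raise instead of answering; rate=1.0 fails every call.
    if random.random() < rate:
        raise RateLimitError(retry_after)

try:
    maybe_fail(rate=1.0, retry_after=30)
except RateLimitError as exc:
    print(exc.retry_after)  # 30
```

Setting `rate=1.0` makes the failure deterministic, which is what you want when asserting that retry logic actually fires.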

### Standalone mock server

```bash
mockllm start --responses responses.yml --port 8000
```

## YAML Configuration

```yaml
responses:
  "what colour is the sky?": "The sky is blue due to Rayleigh scattering."
  "tell me a joke": "Why don't programmers like nature? Too many bugs!"

defaults:
  unknown_response: "I don't know the answer to that."

settings:
  lag_enabled: true
  lag_factor: 10
```
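
Once parsed, this config drives a simple lookup with a fallback. A sketch of how the fields above might be consumed (the dict mirrors what `yaml.safe_load` returns for the YAML above; the exact lag formula is an assumption for illustration):

```python
import time

# Parsed form of the YAML configuration above.
config = {
    "responses": {
        "what colour is the sky?": "The sky is blue due to Rayleigh scattering.",
        "tell me a joke": "Why don't programmers like nature? Too many bugs!",
    },
    "defaults": {"unknown_response": "I don't know the answer to that."},
    "settings": {"lag_enabled": True, "lag_factor": 10},
}

def answer(prompt: str) -> str:
    settings = config["settings"]
    if settings["lag_enabled"]:
        # Lag simulation sketch: a small delay scaled by lag_factor.
        time.sleep(0.001 * settings["lag_factor"])
    # Known prompts map to canned replies; everything else gets the default.
    return config["responses"].get(prompt, config["defaults"]["unknown_response"])

print(answer("tell me a joke"))
print(answer("anything else"))
```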

## Development

```bash
pip install -r requirements.txt
pip install -e .

# Run tests
python -m pytest tests/ -v
```

## License

[Apache-2.0](LICENSE)

---

<sub>Inspired by [mockllm](https://pypi.org/project/mockllm/).</sub>
