Metadata-Version: 2.4
Name: llmshield-ai
Version: 0.1.0
Summary: Lightweight validation, repair, and retry helpers for LLM outputs.
Author: llmshield contributors
License: MIT
Keywords: ai,guardrails,json,llm,pydantic,validation
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Requires-Python: >=3.9
Provides-Extra: all
Requires-Dist: anthropic>=0.100.0; extra == 'all'
Requires-Dist: jsonschema>=4; extra == 'all'
Requires-Dist: openai>=2; extra == 'all'
Requires-Dist: pydantic>=2; extra == 'all'
Requires-Dist: python-dotenv>=1; extra == 'all'
Provides-Extra: anthropic
Requires-Dist: anthropic>=0.100.0; extra == 'anthropic'
Provides-Extra: dev
Requires-Dist: anthropic>=0.100.0; extra == 'dev'
Requires-Dist: build; extra == 'dev'
Requires-Dist: jsonschema>=4; extra == 'dev'
Requires-Dist: openai>=2; extra == 'dev'
Requires-Dist: pydantic>=2; extra == 'dev'
Requires-Dist: pytest>=8; extra == 'dev'
Requires-Dist: python-dotenv>=1; extra == 'dev'
Requires-Dist: twine; extra == 'dev'
Provides-Extra: dotenv
Requires-Dist: python-dotenv>=1; extra == 'dotenv'
Provides-Extra: integrations
Requires-Dist: anthropic>=0.100.0; extra == 'integrations'
Requires-Dist: openai>=2; extra == 'integrations'
Provides-Extra: jsonschema
Requires-Dist: jsonschema>=4; extra == 'jsonschema'
Provides-Extra: openai
Requires-Dist: openai>=2; extra == 'openai'
Provides-Extra: pydantic
Requires-Dist: pydantic>=2; extra == 'pydantic'
Description-Content-Type: text/markdown

# llmshield

`llmshield` is a lightweight Python library for validating AI/LLM outputs before your app trusts them.

## Goals

- Validate JSON and structured outputs
- Validate against Pydantic models
- Expose a small, focused core API
- Detect likely secrets and simple PII patterns
- Support custom validation rules
- Produce clear errors and retry prompts for LLM repair loops
- Best-effort JSON repair
- Optional JSON Schema validation
- Provider hooks for direct LLM calls

## Installation

```bash
python3 -m pip install llmshield-ai
```

The PyPI package is `llmshield-ai`; the Python import is `llmshield`.

## Quick start

```python
from llmshield import Json, NoSecrets, validate

result = validate(
    '{"name": "Ada", "age": 32}',
    rules=[Json(), NoSecrets()],
)

if result.ok:
    print(result.value)  # parsed JSON when a JSON rule is used
else:
    print(result.errors)
    print(result.to_retry_prompt())
```
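The exact heuristics behind `NoSecrets` are internal to the library, but pattern-based secret detection can be sketched in a few lines. The patterns below are illustrative examples, not llmshield's actual rule set:

```python
import re

# Illustrative patterns only; a production rule set would be broader and tuned.
SECRET_PATTERNS = [
    re.compile(r"sk-[A-Za-z0-9]{20,}"),                 # OpenAI-style API key
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key
]

def contains_likely_secret(text: str) -> bool:
    """Return True if any secret-like pattern appears in the text."""
    return any(p.search(text) for p in SECRET_PATTERNS)
```

Checks like this are cheap enough to run on every model response before it reaches downstream code.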

## Pydantic schema validation

```python
from pydantic import BaseModel
from llmshield import validate
from llmshield.rules import JsonRule, PydanticRule

class UserProfile(BaseModel):
    name: str
    age: int
    email: str

result = validate(
    '{"name": "Ada", "age": 32, "email": "ada@example.com"}',
    rules=[JsonRule(), PydanticRule(UserProfile)],
)

print(result.ok)
print(result.value)
```

## JSON repair

```python
from llmshield import validate
from llmshield.rules import JsonRule

result = validate('{name: "Ada", age: 32,}', rules=[JsonRule(repair=True)])
print(result.ok)
print(result.value)
print(result.warnings)
```
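`JsonRule(repair=True)` applies best-effort fixes before parsing. The library's repair logic isn't shown here, but two of the most common LLM JSON mistakes, bare keys and trailing commas, can be sketched with simple regex rewrites (a naive illustration, not llmshield's implementation):

```python
import json
import re

def naive_json_repair(text: str) -> str:
    """Best-effort fixes for two common LLM JSON mistakes.

    Simplified illustration only: quotes bare object keys and
    removes trailing commas before a closing brace or bracket.
    """
    # Quote bare keys: {name: ...} -> {"name": ...}
    text = re.sub(r'([{,]\s*)([A-Za-z_][A-Za-z0-9_]*)(\s*:)', r'\1"\2"\3', text)
    # Drop trailing commas: [1, 2,] -> [1, 2]
    text = re.sub(r',(\s*[}\]])', r'\1', text)
    return text

repaired = naive_json_repair('{name: "Ada", age: 32,}')
print(json.loads(repaired))  # {'name': 'Ada', 'age': 32}
```

A naive approach like this can mangle string values that happen to contain key-like text, which is why repaired output is worth surfacing as a warning rather than accepting silently.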

## JSON Schema validation

```python
from llmshield import validate
from llmshield.rules import JsonRule, JsonSchemaRule

schema = {
    "type": "object",
    "required": ["name", "age"],
    "properties": {
        "name": {"type": "string"},
        "age": {"type": "integer", "minimum": 0},
    },
}

result = validate('{"name": "Ada", "age": 32}', rules=[JsonRule(), JsonSchemaRule(schema)])
```

Install JSON Schema support with:

```bash
python3 -m pip install "llmshield-ai[jsonschema]"
```

## Retry prompts

```python
from llmshield import build_json_retry_prompt

# `result` and `schema` come from the JSON Schema example above.
if not result.ok:
    prompt = build_json_retry_prompt(result, schema=schema)
```
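The exact text `build_json_retry_prompt` produces is up to the library; conceptually, a retry prompt embeds the validation errors and the expected schema so the model can self-correct. A hand-rolled equivalent might look like this (illustrative wording, not llmshield's output):

```python
import json
from typing import Optional

def build_retry_prompt(errors: list, schema: Optional[dict] = None) -> str:
    """Illustrative retry prompt; llmshield's wording will differ."""
    lines = ["Your previous response was invalid:"]
    lines += [f"- {err}" for err in errors]
    if schema is not None:
        lines.append("Respond with JSON matching this schema:")
        lines.append(json.dumps(schema, indent=2))
    lines.append("Return only the corrected JSON, with no extra text.")
    return "\n".join(lines)

prompt = build_retry_prompt(["age must be an integer"], {"type": "object"})
```

Feeding the concrete errors back, rather than just asking the model to "try again", is what makes repair loops converge quickly.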

## Streaming validation

```python
from llmshield.experimental import StreamingTextValidator

stream = StreamingTextValidator(max_chars=1000, detect_secrets=True)
for chunk in chunks:  # chunks: any iterable of streamed text pieces
    result = stream.feed(chunk)
    if not result.ok:
        break
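The underlying idea is to run incremental checks on the accumulated text so a bad stream can be cut off early. A standalone sketch (not llmshield's implementation; the secret pattern is an example only):

```python
import re

class TinyStreamChecker:
    """Illustrative incremental checker: length cap plus one secret pattern."""

    SECRET = re.compile(r"sk-[A-Za-z0-9]{20,}")  # example pattern only

    def __init__(self, max_chars: int):
        self.max_chars = max_chars
        self.buffer = ""

    def feed(self, chunk: str) -> bool:
        """Accumulate a chunk; return True while the text is still acceptable."""
        self.buffer += chunk
        if len(self.buffer) > self.max_chars:
            return False
        return not self.SECRET.search(self.buffer)

checker = TinyStreamChecker(max_chars=1000)
for chunk in ["Hello, ", "world!"]:
    if not checker.feed(chunk):
        break  # stop consuming the stream as soon as a check fails
```

Breaking out of the loop on the first failure is the whole point: you avoid paying for, and exposing users to, the rest of a bad response.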

## Semantic and judge rules

```python
from llmshield.experimental import ContainsAnyRule, SimilarityRule, LLMJudgeRule

# LLMJudgeRule accepts your own judge function returning bool, score, or ValidationResult.
```
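Since `LLMJudgeRule` accepts an arbitrary judge callable, the judge itself can be anything from a keyword check to another model call. A minimal boolean judge might look like this (illustrative; the topic list is hypothetical and the rule's exact calling convention is defined by the library):

```python
def on_topic_judge(value: str) -> bool:
    """Toy judge: accept outputs that mention at least one required topic."""
    required = {"refund", "return", "exchange"}  # hypothetical topic list
    words = set(value.lower().split())
    return bool(required & words)

# With the library, you would pass the callable to the rule, e.g.:
# rule = LLMJudgeRule(on_topic_judge)
print(on_topic_judge("We will refund your order."))  # True
```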

## Custom rules

```python
from llmshield import validate
from llmshield.rules import CustomRule

result = validate("hello", rules=[
    CustomRule(lambda value, context: len(value) > 3, message="Output is too short")
])
```

## Hook directly into LLM calls

You can wrap any LLM function so validation happens immediately after the model call. If validation fails, llmshield can retry with an error-aware correction prompt.

```python
from llmshield import Guard, Json


def call_llm(prompt):
    # Replace this with OpenAI, Anthropic, Ollama, etc.
    return '{"answer": "Paris"}'

shield = Guard(rules=[Json()], max_retries=1)
result = shield.call(call_llm, prompt="Return JSON with an answer field.")

if result.ok:
    print(result.value)      # parsed/validated value
    print(result.response)   # original provider response
else:
    print(result.errors)
```

There are also provider-style wrappers:

```python
from llmshield import Json
from llmshield.integrations.openai import OpenAIChatShield

shielded_chat = OpenAIChatShield(client, rules=[Json()], max_retries=1)
result = shielded_chat.create(model="gpt-4.1-mini", messages=[...])
```

See `examples/14_hook_generic_llm_call.py`, `examples/15_hook_openai_style_client.py`, and `examples/16_hook_anthropic_style_client.py`.

## Detailed examples

See [`docs/usage.md`](docs/usage.md) for detailed explanations and [`examples/`](examples/) for runnable code examples covering:

- text validation
- JSON validation and repair
- Pydantic and JSON Schema validation
- PII and secret detection
- regex and custom business rules
- retry prompts
- streaming validation
- semantic and LLM-judge validation
- OpenAI and Anthropic response helpers
- production-style retry loops
- direct hooks into generic, OpenAI-style, and Anthropic-style LLM calls

## Status

Early MVP. The core API focuses on structured LLM output validation, JSON repair, retries, safety checks, and direct LLM-call guards. Streaming and semantic checks live under `llmshield.experimental`.
