Metadata-Version: 2.4
Name: rule-lab
Version: 0.1.2
Summary: WinstonRedGuard local-first deterministic rule evaluation engine
Author-email: Yakuphan <yakuphan.yucel11@gmail.com>
License-Expression: MIT
Project-URL: Homepage, https://github.com/yakuphanycl/WinstonRedGuard
Requires-Python: >=3.10
Description-Content-Type: text/markdown
Provides-Extra: dev
Requires-Dist: pytest>=8; extra == "dev"

# rule-lab

A lightweight, local-first, deterministic rule evaluation engine for Python.

Define rules in JSON, evaluate them against any context, detect conflicts, and simulate outcomes — with zero external dependencies.

[![PyPI version](https://img.shields.io/pypi/v/rule-lab.svg)](https://pypi.org/project/rule-lab/)
[![Python 3.10+](https://img.shields.io/pypi/pyversions/rule-lab.svg)](https://pypi.org/project/rule-lab/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

## Installation
```bash
pip install rule-lab
```

## Quick Start
```python
from rule_lab import load_rules_from_dict, evaluate_rules

ruleset = {
    "rules": [
        {
            "rule_id": "r1",
            "name": "Block high risk",
            "conditions": [{"field": "risk_score", "op": "gt", "value": 80}],
            "action": "block",
            "priority": 10
        }
    ]
}

rules = load_rules_from_dict(ruleset)
result = evaluate_rules(rules, context={"risk_score": 95})

print(result.matched_count)      # 1
print(result.results[0].action)  # block
```

## CLI
```bash
# Validate a rule file
rule-lab validate --rules rules.json

# Simulate rules against a list of contexts
rule-lab simulate --rules rules.json --contexts contexts.json

# Detect conflicting rules
rule-lab diff --rules rules.json
```

## API Reference

| Function | Description |
|---|---|
| `load_rules_from_file(path)` | Load rules from a JSON file |
| `load_rules_from_dict(data)` | Load rules from a dict |
| `load_rules_from_list(rules)` | Load rules from a list |
| `evaluate_rule(rule, context)` | Evaluate a single rule |
| `evaluate_rules(rules, context)` | Evaluate a list of rules |
| `simulate(rules, contexts)` | Simulate multiple contexts |

## Rule Format
```json
{
  "rules": [
    {
      "rule_id": "unique-id",
      "name": "Human readable name",
      "conditions": [
        {"field": "score", "op": "gt", "value": 50}
      ],
      "action": "approve",
      "priority": 10,
      "tags": ["finance", "v1"],
      "metadata": {}
    }
  ]
}
```
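To make the condition semantics concrete, here is a minimal pure-Python sketch of how a condition such as `{"field": "score", "op": "gt", "value": 50}` can be checked against a context. This is an illustration, not rule-lab's actual implementation: `gt` is the only operator shown in this README, so the `lt` and `eq` entries and the all-conditions-must-hold (AND) behaviour are assumptions made for demonstration.

```python
import operator

# Illustrative sketch only -- rule-lab's real evaluator may differ.
# "gt" is the only operator used in this README; "lt" and "eq" are
# assumed here purely for demonstration.
OPS = {
    "gt": operator.gt,
    "lt": operator.lt,
    "eq": operator.eq,
}

def condition_matches(condition: dict, context: dict) -> bool:
    """True if the context field satisfies one condition."""
    if condition["field"] not in context:
        return False
    return OPS[condition["op"]](context[condition["field"]], condition["value"])

def rule_matches(rule: dict, context: dict) -> bool:
    """A rule matches when every condition holds (AND semantics assumed)."""
    return all(condition_matches(c, context) for c in rule["conditions"])

rule = {
    "rule_id": "unique-id",
    "conditions": [{"field": "score", "op": "gt", "value": 50}],
    "action": "approve",
}
print(rule_matches(rule, {"score": 72}))  # True
print(rule_matches(rule, {"score": 30}))  # False
```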

## Use Cases

- **AI release gating** — validate LLM app outputs before production
- **Policy enforcement** — define and run compliance rules as code
- **Decision engines** — replace hardcoded if/else logic with JSON rules
- **Audit trails** — every rule evaluation is traceable and reproducible
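The "decision engines" case can be sketched in plain Python. The ruleset below reuses the rule from the Quick Start; the evaluator is a hypothetical stand-in for rule-lab, and its priority ordering, first-match-wins behaviour, and `allow` default are assumptions, not documented API.

```python
# Hardcoded logic that a JSON ruleset can replace (illustrative only):
def decide_hardcoded(context: dict) -> str:
    if context["risk_score"] > 80:
        return "block"
    return "allow"

# The same decision expressed as data, in the rule format shown above.
RULES = [
    {
        "rule_id": "r1",
        "name": "Block high risk",
        "conditions": [{"field": "risk_score", "op": "gt", "value": 80}],
        "action": "block",
        "priority": 10,
    },
]

def decide_from_rules(rules: list, context: dict, default: str = "allow") -> str:
    """Hypothetical stand-in for rule-lab's evaluator: highest priority
    first, first matching rule wins, only the "gt" operator handled."""
    for rule in sorted(rules, key=lambda r: r["priority"], reverse=True):
        matched = all(
            c["op"] == "gt" and context.get(c["field"], float("-inf")) > c["value"]
            for c in rule["conditions"]
        )
        if matched:
            return rule["action"]
    return default

print(decide_hardcoded({"risk_score": 95}))          # block
print(decide_from_rules(RULES, {"risk_score": 95}))  # block
print(decide_from_rules(RULES, {"risk_score": 10}))  # allow
```

Moving thresholds into data means a rule change ships as a JSON edit rather than a code change, which is also what keeps evaluations traceable for the audit-trail use case above.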

## Design Principles

- **Zero dependencies** — stdlib only, no surprise installs
- **Deterministic** — same input always produces same output
- **Local-first** — no network calls, no cloud required
- **Testable** — every rule is independently verifiable

## License

MIT — built by [WinstonRed](https://github.com/yakuphanycl/WinstonRedGuard)
