Metadata-Version: 2.4
Name: sovereign-engine
Version: 1.0.7
Summary: Procedural Game Engine powered by IntentShield and LogicShield (BSL 1.1)
Home-page: https://github.com/mattijsmoens/sovereign-engine
Author: SovereignShield
Author-email: contact@sovereign-shield.net
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: Other/Proprietary License
Classifier: Programming Language :: Python :: 3
Classifier: Topic :: Games/Entertainment
Classifier: Topic :: Security
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: requests>=2.31.0
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: license-file
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

# Sovereign Engine

The zero-trust procedural game engine, built on the **IntentShield** and **LogicShield** architectures.

Sovereign Engine lets game developers integrate large language models (LLMs) into their games to generate procedural dialogue, dynamic quests, and complex NPC interactions, while deterministically validating that the model's output conforms to your game's rules.

## Installation

```bash
pip install sovereign-engine
```

## Features
- **Deterministic Validation:** The `LogicShield` backend rejects any LLM output that is not well-formed JSON conforming to your rules.
- **Action Gatekeeper:** The integrated `IntentShield` layer intercepts the AI's proposed actions and blocks anything outside your allow-list.
- **Local First:** Runs on your own hardware via `Ollama` out of the box. No internet connection or API keys required.
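The gatekeeper idea can be illustrated with a minimal standalone sketch. This shows the general pattern, not the engine's internal implementation; the `ALLOWED_VERBS` set and `validate_action` helper are hypothetical names for illustration:

```python
import json

# Hypothetical allow-list: only these verbs may reach the game state.
ALLOWED_VERBS = {"SPEAK", "GIVE_ITEM", "ATTACK"}

def validate_action(raw_llm_output):
    """Parse a proposed action and reject any verb outside the allow-list."""
    action = json.loads(raw_llm_output)          # must be valid JSON
    if action.get("verb") not in ALLOWED_VERBS:  # verb must be whitelisted
        raise ValueError(f"Blocked verb: {action.get('verb')!r}")
    return action

validate_action('{"verb": "SPEAK", "text": "Halt!"}')   # accepted
# validate_action('{"verb": "DELETE_SAVE"}')            # would raise ValueError
```

Everything the model proposes passes through a check like this before it can affect the game.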

## Quick Start (Zero Config)

Sovereign Engine connects to a local **Ollama** instance by default. If Ollama is running on your machine, just initialize the engine.

```python
from sovereign_engine.engine import ProceduralDialogueEngine

# Defaults to localhost:11434 with llama3.1:8b
engine = ProceduralDialogueEngine()

scene = engine.generate_node(
    player_state={"level": 5, "inventory": ["Iron Sword", "Shield"]},
    recent_history=["You enter a dark cavern and hear a low growl."],
    player_action="I walk forward with my shield raised."
)

print(scene)
```

## Changing the Local Ollama Model
To use a different model installed on your machine (such as `mistral`, `gemma`, or `deepseek`), pass the `ollama_model` parameter to the engine:

```python
# Point the engine at a specific local model
engine = ProceduralDialogueEngine(ollama_model="mistral:latest")

# Or an uncensored narrative model
engine = ProceduralDialogueEngine(ollama_model="hf.co/HauhauCS/Qwen3.5-27B-Uncensored")
```
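To see which model tags are installed locally before picking one, you can query Ollama's standard `/api/tags` endpoint. This uses Ollama's public HTTP API directly, not a Sovereign Engine call; the `installed_models` helper is just a small illustration:

```python
import json
from urllib.request import urlopen

def installed_models(payload):
    """Extract model tags from an Ollama /api/tags response payload."""
    return [m["name"] for m in payload.get("models", [])]

# With a local Ollama server running on the default port:
# with urlopen("http://localhost:11434/api/tags") as resp:
#     print(installed_models(json.load(resp)))
```

Any tag returned here can be passed as `ollama_model`.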

## Customizing Rules (LogicShield) & Actions (IntentShield)

You can extend the engine to enforce your own game mechanics deterministically, or to allow the AI new action verbs (such as `CRAFT` or `STEAL`).

```python
from sovereign_engine.logicshield.rules import Rule

my_rules = [
    Rule.less_than("gold_dropped", "max_loot")
]

engine = ProceduralDialogueEngine(
    ollama_model="llama3.1:8b",
    custom_actions=["STEAL", "CRAFT", "PERSUADE"], 
    custom_rules=my_rules
)
```
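A rule like `Rule.less_than("gold_dropped", "max_loot")` expresses a numeric comparison that is checked deterministically against the generated payload. As a standalone sketch of that idea (the `less_than` function below is a hypothetical illustration, not the library's implementation):

```python
def less_than(field, bound_field):
    """Return a check asserting payload[field] < payload[bound_field]."""
    def check(payload):
        return payload[field] < payload[bound_field]
    return check

rule = less_than("gold_dropped", "max_loot")
rule({"gold_dropped": 10, "max_loot": 50})   # True: within the loot cap
rule({"gold_dropped": 99, "max_loot": 50})   # False: would be rejected
```

Because the check is plain arithmetic on the parsed JSON, it passes or fails the same way every time, regardless of what the model generates.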

## Cloud API Fallbacks
If you prefer not to run locally via Ollama, you can fall back to the OpenRouter cloud API by passing your key directly:
```python
# The engine routes to the cloud when an API key is provided
engine = ProceduralDialogueEngine(api_key="sk-or-your-developer-key")
```
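Rather than hardcoding the key in source, it is safer to read it from an environment variable. The variable name `OPENROUTER_API_KEY` and the `engine_kwargs` helper below are conventions for illustration, not something the engine requires:

```python
import os

def engine_kwargs():
    """Build constructor kwargs: use the cloud key if set, else local defaults."""
    key = os.environ.get("OPENROUTER_API_KEY")  # conventional variable name
    return {"api_key": key} if key else {}

# engine = ProceduralDialogueEngine(**engine_kwargs())
```

With the variable unset, the engine falls through to its local Ollama defaults.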

## License
This project is licensed under the **Business Source License 1.1 (BSL 1.1)**. 
For production or commercial use, you must obtain a separate commercial license from SovereignShield. 
