Metadata-Version: 2.4
Name: coreason-veritas
Version: 0.3.0
Summary: coreason_veritas is the non-negotiable governance layer of the CoReason platform (Prosperity Public License 3.0.0)
License: Proprietary
License-File: LICENSE
License-File: NOTICE
Author: Gowtham A Rao
Author-email: gowtham.rao@coreason.ai
Requires-Python: >=3.12,<3.15
Classifier: License :: Other/Proprietary License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Requires-Dist: cryptography (>=46.0.3,<47.0.0)
Requires-Dist: fastapi (>=0.128.0,<0.129.0)
Requires-Dist: httpx (>=0.28.1,<0.29.0)
Requires-Dist: loguru (>=0.7.2,<0.8.0)
Requires-Dist: opentelemetry-api (>=1.39.1,<2.0.0)
Requires-Dist: opentelemetry-exporter-otlp (>=1.39.1,<2.0.0)
Requires-Dist: opentelemetry-instrumentation-fastapi (>=0.60b1,<0.61)
Requires-Dist: pydantic (>=2.12.5,<3.0.0)
Requires-Dist: uvicorn (>=0.40.0,<0.41.0)
Project-URL: Documentation, https://github.com/CoReason-AI/coreason_veritas
Project-URL: Homepage, https://github.com/CoReason-AI/coreason_veritas
Project-URL: Repository, https://github.com/CoReason-AI/coreason_veritas
Description-Content-Type: text/markdown

# coreason_veritas

coreason_veritas is the non-negotiable governance layer of the CoReason platform.

[![CI](https://github.com/CoReason-AI/coreason_veritas/actions/workflows/ci.yml/badge.svg)](https://github.com/CoReason-AI/coreason_veritas/actions/workflows/ci.yml)

## **The Architecture and Utility of coreason_veritas**

### **1. The Philosophy (The Why)**

In the high-stakes world of biopharmaceuticals and GxP regulated environments, the probabilistic nature of standard Large Language Models represents a massive liability. `coreason_veritas` was architected to solve this specific friction point: it is the non-negotiable governance layer designed to impose **"Glass Box"** principles onto AI agents.

Our intent is to replace the inherent randomness of generative AI with **"Deterministic Equivalence"** and **"Radical Auditability"**. This package acts as a middleware "Safety Anchor," enforcing a "Lobotomy Protocol" that restricts an LLM’s creativity in favor of epistemic integrity. By cryptographically verifying the chain of custody for code and forcibly overriding stochastic parameters, we turn AI from a creative writer into a verifiable reasoning engine backed by an **Immutable Execution Record (IER)**.

### **2. Under the Hood (The Dependencies & Logic)**

The stack defined in our `pyproject.toml` is focused and lightweight, designed for integration rather than heavy computation:

*   **opentelemetry-api**: We depend on OTel to treat AI reasoning traces as critical infrastructure telemetry, enabling cloud-native, enterprise-grade observability.
*   **cryptography**: This powers our **Gatekeeper** function. We use asymmetric cryptographic verification to ensure that "Agent Specs" have not been tampered with since they were signed by a Scientific Review Board.
*   **pydantic**: Enforces strict data validation and type safety on every structured payload, which is essential in GxP environments.
*   **loguru**: Used for developer ergonomics and structured logging output.
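
As a minimal sketch of the kind of asymmetric verification the Gatekeeper performs, here is the `cryptography` library's Ed25519 API in action. The key handling and the `verify_asset` helper are illustrative assumptions, not the package's actual internals:

```python
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Hypothetical review-board keypair; in practice only the public key
# would ship with the deployment.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign a canonical serialization of an Agent Spec.
asset = json.dumps({"trial_id": "NCT123456"}, sort_keys=True).encode()
signature = private_key.sign(asset)

def verify_asset(payload: bytes, sig: bytes) -> bool:
    """Return True only if the signature matches the payload exactly."""
    try:
        public_key.verify(sig, payload)
        return True
    except InvalidSignature:
        return False
```

Any single-byte change to the payload causes `verify_asset` to return `False`, which is what lets the Gatekeeper halt execution before tampered code ever runs.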

The internal logic is structured around three atomic units that execute in a specific sequence:

1.  **The Gatekeeper:** Verifies the cryptographic signature of the input asset before any code runs. If the signature is invalid, execution halts immediately.
2.  **The Auditor:** Initializes an OpenTelemetry span with mandatory attributes (User ID, Asset ID, Signature) to create the IER.
3.  **The Anchor:** Uses Python's `contextvars` to set a thread-safe flag (`_ANCHOR_ACTIVE`). It creates a scope where any LLM configuration is sanitized—forcing `temperature=0.0` and `seed=42`—effectively "lobotomizing" the model to ensure it produces the exact same output for the same input every time.
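
The Anchor's `contextvars` mechanics can be sketched as follows. This is an illustrative re-creation under stated assumptions, not the package's actual implementation; the `anchored` and `sanitize` helper names are invented for the example:

```python
import contextvars
from contextlib import contextmanager
from typing import Any, Dict, Iterator

# Thread-safe, task-local flag marking the deterministic scope.
_ANCHOR_ACTIVE: contextvars.ContextVar[bool] = contextvars.ContextVar(
    "_ANCHOR_ACTIVE", default=False
)

def is_anchor_active() -> bool:
    return _ANCHOR_ACTIVE.get()

def sanitize(config: Dict[str, Any]) -> Dict[str, Any]:
    """Force deterministic sampling parameters while anchored."""
    if not is_anchor_active():
        return config
    return {**config, "temperature": 0.0, "top_p": 1.0, "seed": 42}

@contextmanager
def anchored() -> Iterator[None]:
    """Open a scope in which all LLM configs are sanitized."""
    token = _ANCHOR_ACTIVE.set(True)
    try:
        yield
    finally:
        # Restore the previous state even if the body raises.
        _ANCHOR_ACTIVE.reset(token)
```

Because `contextvars` values are scoped per async task, two concurrent requests cannot leak the anchored state into each other.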

### **3. In Practice (The How)**

The most powerful way to use `coreason_veritas` is via our high-level wrapper, which bundles the Gatekeeper, Auditor, and Anchor into a single line of code.

**Example 1: The Atomic Wrapper**

This example demonstrates the "Happy Path" for protecting an asynchronous agent function. The `@governed_execution` decorator ensures that the function cannot run unless the inputs are signed, the execution is traced, and the environment is deterministic.

```python
import asyncio
from typing import Any, Dict

from coreason_veritas import governed_execution
from coreason_veritas.anchor import is_anchor_active

# The decorator handles the heavy lifting of verification and tracing
@governed_execution(asset_id_arg="spec", signature_arg="sig", user_id_arg="user")
async def run_clinical_analysis(spec: Dict[str, Any], sig: str, user: str) -> str:
    """A critical analysis function that must be auditable."""
    # Verify we are in a deterministic scope (the Anchor is holding)
    if is_anchor_active():
        print("System is anchored: temperature forced to 0.0")

    # ... perform business logic ...
    return "Analysis Complete: Risk Low"

# Execution
# If 'sig' is invalid, this raises AssetTamperedError before the
# function body ever runs.
result = asyncio.run(
    run_clinical_analysis(
        spec={"trial_id": "NCT123456"},
        sig="deadbeef...",
        user="dr_who",
    )
)
```

**Example 2: The "Lobotomy" Protocol**

For developers integrating directly with LLM clients (like OpenAI or Anthropic), the `DeterminismInterceptor` can be used explicitly to sanitize configuration payloads, ensuring no "creative" parameters slip through.

```python
from coreason_veritas.anchor import DeterminismInterceptor

interceptor = DeterminismInterceptor()

# An unsafe config that might produce hallucinations (high temp, random seed)
risky_config = {
    "model": "gpt-4",
    "temperature": 0.9,
    "top_p": 0.95,
    "seed": 999
}

# The interceptor forcibly overrides stochastic params
safe_config = interceptor.enforce_config(risky_config)

print(safe_config)
# Output:
# {
#   "model": "gpt-4",
#   "temperature": 0.0,  <-- Sanitized
#   "top_p": 1.0,        <-- Sanitized
#   "seed": 42           <-- Injected
# }
```

## Getting Started

### Prerequisites

- Python 3.12+
- Poetry

### Installation

You can install `coreason_veritas` directly from PyPI:

```sh
pip install coreason-veritas
```

Or using Poetry:

```sh
poetry add coreason-veritas
```

Alternatively, to install from source:

1.  Clone the repository:
    ```sh
    git clone https://github.com/CoReason-AI/coreason_veritas.git
    cd coreason_veritas
    ```
2.  Install dependencies:
    ```sh
    poetry install
    ```

### Development

-   Run the linter:
    ```sh
    poetry run pre-commit run --all-files
    ```
-   Run the tests:
    ```sh
    poetry run pytest
    ```

## License

This project is licensed under the Prosperity Public License 3.0.0. See the [LICENSE](LICENSE) file for details.

