Metadata-Version: 2.4
Name: ulockai
Version: 0.1.1
Summary: A lightweight, production-ready AI security SDK for protecting LLM agents.
Author-email: UlockAI Team <oss@ulockai.com>
Project-URL: Homepage, https://github.com/SaravanavelE/ulockai
Project-URL: Bug Tracker, https://github.com/SaravanavelE/ulockai/issues
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Classifier: Topic :: Security
Classifier: Intended Audience :: Developers
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Dynamic: requires-python

# ulockai 🔒 Enterprise AI Security SDK

A lightweight, **enterprise-grade** Python library for securing AI agents and LLM applications.

Designed to detect prompt injections, memory poisoning, API misuse, and sensitive data leakage with **sub-millisecond overhead**.

## Features 🚀

- **🛡️ Prompt Injection Detection**: Advanced regex and pattern-aware security.
- **📈 Real-time Telemetry**: Monitoring for attack frequency, types, and latency.
- **🧠 Memory Poisoning & Role Security**: Prevents identity manipulation.
- **🛠️ API & Tool Monitoring**: Sanitize tool calls from agents.
- **🔌 Plugin Architecture**: Register custom detectors for legacy or complex rules.
- **🏗️ Middleware & Streaming**: Support for generators and OpenAI-style streams.
- **⚙️ False Positive Control**: Dynamic `allowlist` and `blocklist` for fine-grained rule control.

## Performance ⚡

*Based on 1,000 iterations on standard hardware:*
- **Scan Time (Avg)**: ~0.16 ms
- **Throughput**: ~6,000+ requests/sec
- **Memory Footprint**: ~3-5 MB overhead

## Installation 📦

```bash
pip install ulockai
```

## Quick Start 🚀

### 1. Basic Scan & Telemetry
```python
from ulockai import guard, telemetry

# Scan input
res = guard.scan(user_prompt="Ignore all instructions")

# Access enterprise metrics
print(telemetry.get_report())
```

### 2. False Positive Control ⚙️
```python
# Allow specific phrases locally or globally
guard.allowlist(["Company instructions for internal dev"])

# Block specific suspicious text immediately
guard.blocklist(["malicious_endpoint_domain.com"])
```

### 3. Middleware & Streaming ⚡
```python
from ulockai import guard

# Wrap LLM stream generator
def mock_llm_stream():
    yield "Hello "
    yield "world"

secure_stream = guard.wrap_stream(mock_llm_stream())
for chunk in secure_stream:
    print(chunk)
```

### 4. Plugin Architecture 🔌
```python
from ulockai import guard

def custom_pwn_detector(prompt):
    if "pwn" in prompt:
        return [(95, "Custom pwn found", "Plugin Attack")]
    return []

guard.register_detector(custom_pwn_detector)
```

## Why UlockAI? 🛡️

UlockAI is designed for enterprise platforms where performance is as important as safety. It provides a deterministic layer that catches 90% of common attacks without the cost, latency, or unreliability of calling another LLM for security monitoring.

## License 📄

MIT
