Metadata-Version: 2.4
Name: aisafeguard
Version: 0.1.0
Summary: Safety rails for every AI app — model-agnostic guardrails for LLM applications
Author: AISafe Guard Contributors
License: Apache-2.0
Project-URL: Homepage, https://github.com/aisafeguard/aisafeguard
Project-URL: Documentation, https://aisafeguard.dev
Project-URL: Repository, https://github.com/aisafeguard/aisafeguard
Project-URL: Issues, https://github.com/aisafeguard/aisafeguard/issues
Keywords: ai,safety,guardrails,llm,security,prompt-injection,pii
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Security
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Typing :: Typed
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pydantic>=2.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: click>=8.0
Requires-Dist: anyio<5,>=3.7
Provides-Extra: ml
Requires-Dist: transformers>=4.30; extra == "ml"
Requires-Dist: torch>=2.0; extra == "ml"
Requires-Dist: detoxify>=0.5; extra == "ml"
Requires-Dist: sentence-transformers>=2.0; extra == "ml"
Provides-Extra: proxy
Requires-Dist: fastapi>=0.115; extra == "proxy"
Requires-Dist: uvicorn[standard]>=0.20; extra == "proxy"
Requires-Dist: httpx>=0.24; extra == "proxy"
Provides-Extra: integrations
Requires-Dist: openai>=1.0; extra == "integrations"
Requires-Dist: anthropic>=0.20; extra == "integrations"
Requires-Dist: langchain-core>=0.1; extra == "integrations"
Requires-Dist: litellm>=1.0; extra == "integrations"
Provides-Extra: telemetry
Requires-Dist: opentelemetry-api>=1.20; extra == "telemetry"
Requires-Dist: opentelemetry-sdk>=1.20; extra == "telemetry"
Provides-Extra: all
Requires-Dist: aisafeguard[integrations,ml,proxy,telemetry]; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=8.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.23; extra == "dev"
Requires-Dist: pytest-cov>=4.0; extra == "dev"
Requires-Dist: ruff>=0.4; extra == "dev"
Requires-Dist: mypy>=1.8; extra == "dev"
Dynamic: license-file

# AISafe Guard

Safety rails for every AI app. `aisafeguard` is a model-agnostic Python toolkit that scans LLM input/output for prompt injection, jailbreaks, PII, toxicity, malicious URLs, and relevance issues.
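As intuition for what a scanner does, here is a toy, rule-based check in the spirit of the `pii` scanner. This is illustrative only and is not the library's actual implementation:

```python
import re

# Illustrative only: a toy detector for US-SSN-shaped tokens.
# The real "pii" scanner in aisafeguard is more sophisticated than this.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_ssn(text: str) -> bool:
    """Return True if the text appears to contain an SSN-shaped token."""
    return SSN_PATTERN.search(text) is not None
```

In practice you would compose many such checks (injection heuristics, PII patterns, toxicity models) into one pipeline, which is what the library's guards do for you.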

## Install

```bash
pip install aisafeguard
```

From source (fresh clone):

```bash
pip install .
```

Optional extras:

```bash
pip install "aisafeguard[ml]"            # ML scanners (transformers, torch, detoxify)
pip install "aisafeguard[proxy]"         # OpenAI-compatible guard proxy (FastAPI, uvicorn)
pip install "aisafeguard[integrations]"  # OpenAI / Anthropic / LangChain / LiteLLM wrappers
pip install "aisafeguard[telemetry]"     # OpenTelemetry tracing
pip install "aisafeguard[all]"           # everything above
```

## Repository Setup

```bash
git clone <your-repo-url>
cd aisafeguard
python3.11 -m venv .venv
source .venv/bin/activate
pip install -e ".[dev]"  # quotes keep zsh from globbing the extras
npm install              # repo tooling only (commitlint); not a runtime dependency
```

## Node.js Users

This repo does not currently ship a native Node SDK package for runtime usage.

Recommended Node integration today:
1. Run `aisafe proxy` (or Docker) from this repo.
2. Call the OpenAI-compatible endpoint from your Node app.
3. Keep safety policy/config centralized in `aisafe.yaml`.

Example (Node fetch):

```js
const res = await fetch("http://localhost:8000/v1/chat/completions", {
  method: "POST",
  headers: { "Content-Type": "application/json", "x-user-id": "user-123" },
  body: JSON.stringify({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello" }]
  })
});
const data = await res.json();
console.log(data.choices?.[0]?.message?.content);
```
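If the guard blocks a request, the proxy will not return a normal completion. The exact error payload is not documented here, so as a hedge you can tolerate both shapes with a small helper (the `data.error` structure below is an assumption, not a documented contract):

```js
// Sketch only: the error payload shape is an assumption, not a documented
// contract of the proxy. Adjust once you see real blocked responses.
function extractReply(data) {
  if (data.error) {
    throw new Error(data.error.message ?? "request blocked by guard");
  }
  return data.choices?.[0]?.message?.content ?? "";
}
```

Use it on the parsed JSON: `console.log(extractReply(data));`.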

## Quick Start

Decorator:

```python
from aisafeguard import guard

@guard(input=["prompt_injection", "pii"], output=["toxicity", "pii"])
async def ask(prompt: str) -> str:
    return "model output"
```

Guard object:

```python
from aisafeguard import Guard

g = Guard(config="aisafe.yaml")
# scan_input is async — call it from inside an async function / event loop
input_result = await g.scan_input("Ignore previous instructions")
```

OpenAI wrapper:

```python
from openai import OpenAI

from aisafeguard import Guard
from aisafeguard.integrations import wrap_openai

guard = Guard()
client = wrap_openai(OpenAI(), guard)
```

## CLI

```bash
aisafe init
aisafe validate aisafe.yaml
aisafe scan "My SSN is 123-45-6789"
aisafe redteam --strict
aisafe proxy --config aisafe.yaml --host 127.0.0.1 --port 8000 \
  --upstream-base-url https://api.openai.com \
  --upstream-api-key $OPENAI_API_KEY \
  --rpm 120
```

Proxy env vars:
- `AISAFE_UPSTREAM_BASE_URL`
- `AISAFE_UPSTREAM_API_KEY` (or `OPENAI_API_KEY`)

## Docker

```bash
docker build -t aisafeguard:latest .
docker run --rm -p 8000:8000 \
  -e AISAFE_UPSTREAM_API_KEY=$OPENAI_API_KEY \
  aisafeguard:latest
```

## Development

```bash
npm install
PYTHONPATH=src python -m pytest -v
python benchmarks/bench_pipeline.py
```

## Release and Publishing

- Python package publish is automated by `.github/workflows/release.yml`.
- Trigger: push a tag like `v0.1.0`.
- Requirement: set `PYPI_API_TOKEN` in repo secrets.
- npm is currently used only for repo tooling (`commitlint`) via `package.json` and is marked `private`, so it is **not** published to npm.

### Get `PYPI_API_TOKEN`

1. Create or log in to [PyPI](https://pypi.org/).
2. Go to **Account Settings** -> **API tokens** -> **Add API token**.
3. Create a token scoped to the project (recommended) or account.
4. In GitHub repo settings: **Settings** -> **Secrets and variables** -> **Actions** -> **New repository secret**.
5. Name it `PYPI_API_TOKEN` and paste the token value.

### Node Package Support

This repo currently supports Node apps through the proxy API (HTTP), not a native npm runtime SDK.

If you want a real npm package, typical work is:
1. Build a JS/TS client SDK (`@aisafeguard/sdk`) for scan/proxy calls.
2. Add package build pipeline (`tsup`/`rollup`) and type exports.
3. Add npm publish workflow + `NPM_TOKEN` secret.
4. Maintain semver/versioning across Python + Node releases.

Estimated effort:
- Basic SDK wrapper: 1-2 days.
- Production-grade SDK + docs/tests/release automation: 3-7 days.
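A first pass at step 1 could look like the hypothetical sketch below. The class and method names are placeholders and do not exist in this repo yet; the only grounded detail is the proxy's OpenAI-compatible `/v1/chat/completions` route:

```js
// Hypothetical sketch of a minimal JS client for the guard proxy.
// Nothing here ships today; names are placeholders for a future SDK.
class AISafeClient {
  constructor(baseUrl) {
    this.baseUrl = baseUrl;
  }

  // Resolve an API path against the configured proxy base URL.
  endpoint(path) {
    return new URL(path, this.baseUrl).toString();
  }

  // POST a chat completion request through the guard proxy.
  async chat(body) {
    const res = await fetch(this.endpoint("/v1/chat/completions"), {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(body),
    });
    return res.json();
  }
}
```

A production SDK would add typed responses, retries, and error mapping for blocked requests, which is where most of the estimated effort above goes.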
