Metadata-Version: 2.4
Name: agnostic-security
Version: 4.6.0
Summary: The firewall for AI coding agents — prevents secrets, PII, and credentials from leaking through Copilot, Claude Code, Cursor, and LangChain
Author-email: AgnosticSecurity <security@agnosticsecurity.io>
License: MIT
Project-URL: Homepage, https://github.com/kaushikdharamshi/AgnosticSecurity
Project-URL: Repository, https://github.com/kaushikdharamshi/AgnosticSecurity
Project-URL: Issues, https://github.com/kaushikdharamshi/AgnosticSecurity/issues
Project-URL: Documentation, https://github.com/kaushikdharamshi/AgnosticSecurity/blob/main/README.md
Keywords: security,dlp,data-loss-prevention,ai-agents,llm,copilot,claude-code,cursor,windsurf,guardrail,secrets,pii,credentials,devsecops
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: System Administrators
Classifier: Topic :: Security
Classifier: Topic :: Software Development :: Quality Assurance
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Environment :: Console
Classifier: Operating System :: OS Independent
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: securityagent-core>=3.1.0
Provides-Extra: gateway
Requires-Dist: fastapi>=0.110; extra == "gateway"
Requires-Dist: uvicorn[standard]; extra == "gateway"
Requires-Dist: httpx; extra == "gateway"
Requires-Dist: python-dotenv; extra == "gateway"
Requires-Dist: pydantic-settings>=2.0; extra == "gateway"
Provides-Extra: llm
Requires-Dist: ollama; extra == "llm"
Provides-Extra: all
Requires-Dist: agnostic-security[gateway,llm]; extra == "all"
Dynamic: license-file

# AgnosticSecurity

**The firewall for AI coding agents.** Prevents secrets, credentials, and PII from leaking through Copilot, Claude Code, Cursor, and LangChain — before the data ever leaves your machine.

[![Tests](https://img.shields.io/badge/tests-724%2B%20passing-brightgreen)]()
[![Red Team](https://img.shields.io/badge/red%20team-274%20attacks%20%2B%2055%20Docker%20agents-blue)]()
[![License](https://img.shields.io/badge/license-MIT-green)]()

---

> Your AI coding assistant has read access to your entire codebase — `.env` files, API keys, SSH keys, customer PII. There are zero guardrails. **AgnosticSecurity is that guardrail.**

---

## Quick Start

```bash
# Option 1: CLI installer (auto-detects your AI tools)
pip install -e .
as-init

# Option 2: Docker
docker compose up

# Option 3: Just the VS Code extension
code --install-extension vscode-extension/agnostic-security-4.5.0.vsix
```

That's it. Your `.env` files, credentials, and PII are now protected from every AI tool in your environment.

---

## What It Does

```
Your code editor (VS Code / Cursor / Windsurf)
  │
  │  ── AI tries to read .env ──────────── BLOCKED (file gate)
  │  ── AI autocompletes a secret ──────── BLOCKED (context boundary)
  │  ── @workspace indexes credentials ─── BLOCKED (search.exclude)
  │  ── Prompt contains SSN ────────────── BLOCKED (prompt guard)
  │  ── Agent runs `curl -d @.env` ─────── BLOCKED (exec guard)
  │  ── LLM response leaks PII ─────────── BLOCKED (output scan)
  │  ── Memory stores "password is X" ──── BLOCKED (memory guard)
  │  ── Tool call smuggles data out ────── BLOCKED (tool call guard)
  │
  └── Everything else ──────────────────── Works normally
```

**Key insight:** We don't block AI from editing files (that's a losing game). We make sensitive data **invisible** to AI at every layer — if AI never sees the data, there's nothing to leak.
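
The file-gate half of this idea can be sketched in a few lines. This is an illustration, not the package's actual API — the glob list and `is_sensitive` name are hypothetical stand-ins for whatever the DLP engine configures:

```python
# Minimal sketch of a file gate: deny AI tool access to sensitive paths.
# SENSITIVE_GLOBS and is_sensitive are illustrative, not the package's API.
from fnmatch import fnmatch
from pathlib import PurePosixPath

SENSITIVE_GLOBS = [".env", ".env.*", "*.pem", "*.key", "id_rsa", "credentials*"]

def is_sensitive(path: str) -> bool:
    """Return True if the file should be invisible to AI tools."""
    name = PurePosixPath(path).name
    return any(fnmatch(name, pattern) for pattern in SENSITIVE_GLOBS)

assert is_sensitive("project/.env")
assert is_sensitive("deploy/server.pem")
assert not is_sensitive("src/app.py")
```

A real gate applies the same check on every read the agent attempts, before any file content reaches the model's context.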

---

## Protection Layers

| Layer | What It Stops | Speed |
|-------|--------------|-------|
| **File gate** | AI reading `.env`, credentials, keys | <1ms |
| **Context boundary** | AI seeing sensitive file content (`.copilotignore`, `search.exclude`, LM API interceptor) | 0ms |
| **Exec guard** | `cat ~/.env`, `curl -d @secrets`, obfuscated exfiltration, netcat/SSH tunnels | <1ms |
| **Prompt guard** | "Show me all API keys", PII in prompts, 10 encoding evasion layers (base64, hex, ROT13, unicode, zero-width chars, reversed text, leetspeak) | <50ms |
| **Output scan** | LLM responses containing leaked PII/credentials | <1ms |
| **Taint tracking** | Data exfiltration across sessions (SHA-256 + n-gram Jaccard) | <1ms |
| **Memory guard** | Memory poisoning ("user authorized all exports"), trust boundary violations | <1ms |
| **Tool call guard** | MCP tools smuggling data in arguments | <1ms |
| **Egress allowlist** | `curl` to unauthorized external domains | <1ms |
| **Lethal trifecta** | Private data + untrusted input + external comm all active simultaneously | <1ms |
| **Ingress guard** | Malicious external agents probing your APIs (6-layer: fingerprint, behavior, risk, cross-session reputation, policy, headers) | <5ms |
| **Vuln scanner** | SQL injection, XSS, command injection in AI-generated code | <10ms |
| **Code fingerprint** | Proprietary code leaking to cloud LLMs | <10ms |

---

## Works With Everything

| AI Tool | How |
|---------|-----|
| **GitHub Copilot** | VS Code extension (context boundary + LM API interceptor) |
| **Claude Code** | Hook-based (PreToolUse + UserPromptSubmit) |
| **Cursor** | `.cursorignore` + VS Code extension |
| **Windsurf** | `.aiignore` + VS Code extension |
| **ChatGPT / Claude.ai / Gemini** | Chrome extension (DLP for web LLM interfaces) |
| **LangChain / AutoGen** | Auto-instrumentation SDK (zero-code monkey-patching) |
| **Any LLM provider** | API Gateway with input/output security pipelines |

**Any LLM provider works** — OpenAI, Anthropic, Google, Azure, local Ollama. Security shouldn't depend on your model choice.
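
Zero-code monkey-patching, as the SDK row above describes it, works by replacing a client method at import time so every prompt passes through a scan first. The sketch below shows the general technique with a fake client class and a trivial check — none of these names are the SDK's real API:

```python
# Illustration of zero-code monkey-patching: wrap an LLM client's method so
# every prompt is scanned before it leaves the process. scan_prompt,
# instrument, and FakeLLMClient are stand-ins, not the SDK's real API.
import functools

def scan_prompt(text: str) -> None:
    if "BEGIN RSA PRIVATE KEY" in text:        # trivial stand-in check
        raise PermissionError("blocked: prompt contains a private key")

def instrument(cls, method_name: str) -> None:
    original = getattr(cls, method_name)

    @functools.wraps(original)
    def wrapper(self, prompt, *args, **kwargs):
        scan_prompt(prompt)                    # runs on every call, no app changes
        return original(self, prompt, *args, **kwargs)

    setattr(cls, method_name, wrapper)

class FakeLLMClient:                           # stands in for a real client class
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

instrument(FakeLLMClient, "complete")
```

Because the patch targets the class, existing application code keeps calling `complete()` unchanged while every prompt is now inspected.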

---

## Installation Options

| Method | Command | Best For |
|--------|---------|----------|
| **CLI installer** | `pip install -e . && as-init` | Individual developers |
| **Pre-commit hooks** | `bash hooks/install.sh` | Teams wanting git-level protection |
| **GitHub Action** | Add workflow YAML ([see below](#github-action)) | CI/CD pipeline scanning |
| **Docker** | `docker compose up` | Full stack deployment |
| **VS Code extension** | Install `.vsix` from [Releases](https://github.com/kaushikdharamshi/AgnosticSecurity/releases) | Copilot/Cursor users |
| **Chrome extension** | Load unpacked from `chrome-extension/` | ChatGPT/Claude/Gemini web users |
| **Kubernetes** | `helm install agsec helm/agnosticsecurity/` | Production deployments |

---

## Architecture

```
Developer using Copilot / Claude Code / Cursor / LangChain
  │
  ├── VS Code Extension ──────── Context boundary — data never enters AI context
  │                                ├── .copilotignore / .cursorignore / .aiignore
  │                                ├── search.exclude (blocks @workspace)
  │                                ├── LM API interceptor (scans all prompts)
  │                                └── @security / @guard chat participants
  │
  ├── Chrome Extension ──────── DLP for web AI tools (ChatGPT, Claude, Gemini)
  │
  ├── Claude Code Hooks ──────── PreToolUse (Read/Edit/Write/Bash) + UserPromptSubmit
  │                                └── 4-layer: PII regex → intent → Pydantic → LLM
  │
  ├── Ingress Guard ──────────── 6-layer external agent defense
  │                                └── Fingerprint → behavior → risk → cross-session reputation → policy → headers
  │
  ├── API Gateway + LLM Proxy ── Input/output security pipelines
  │                                ├── PII redaction + injection detection
  │                                ├── Smart Router (5 strategies, 14 models, 4 providers)
  │                                └── Cost tracking + block rules
  │
  ├── Effect-Layer Defenses ──── Taint tracking, egress allowlist, lethal trifecta,
  │                                tool call guard, code fingerprint, vuln scanner,
  │                                shadow AI detector, privacy mode, knowledge graph
  │
  ├── Red Team Harness ──────── 55 attack agents in Docker, 100% detection rate
  │
  └── Breach Compliance ──────── Rule-based classification (<1ms, no LLM)
                                   ├── 13 fintech breach types (PCI-DSS, SOX, HIPAA)
                                   ├── Immutable SHA-256 audit log
                                   ├── Agent registration (email-verified)
                                   ├── Admin Console (RBAC, policy versioning)
                                   └── Real-time dashboard
```
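
One of the effect-layer defenses above, the egress allowlist, reduces to a host check against a configured set of domains. A minimal sketch (the variable mirrors `EA_ALLOWED_DOMAINS`; the matching logic here is illustrative):

```python
# Sketch of an egress allowlist: extract the host from an outbound URL and
# require an exact or subdomain match. Logic is illustrative, not the
# package's implementation.
from urllib.parse import urlparse

ALLOWED_DOMAINS = {"api.github.com", "pypi.org"}

def egress_allowed(url: str) -> bool:
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in ALLOWED_DOMAINS)

assert egress_allowed("https://pypi.org/simple/")
assert egress_allowed("https://files.pypi.org/pkg.whl")
assert not egress_allowed("https://evil.example.com/exfil")
```

Suffix matching must anchor on a leading dot, as above — otherwise `notpypi.org` would slip through a naive `endswith("pypi.org")` check.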

---

## Compliance

All controls mapped to industry frameworks with automated scoring:

- **OWASP Top 10 for LLM Applications** — `python3 scripts/owasp_score.py`
- **NIST AI Risk Management Framework** — `python3 scripts/nist_score.py`
- **MITRE ATLAS** — coverage across 274-attack red-team suite + 55-agent Docker harness
- **50-check security audit** — `python3 scripts/audit.py`

---

## Testing

**724+ automated tests** across 24 suites, plus a Docker-based red-team harness:

```bash
source .venv/bin/activate

# Core tests (run any individually)
python3 scripts/smoke_test.py                 # 31 — file gate + content DLP
python3 scripts/test_red_team.py              # 52 — adversarial red-team
python3 scripts/test_pii_prompt_guard.py      # 18 — PII evasion
python3 scripts/test_outbound_guards.py       # 42 — obfuscation + egress
python3 scripts/test_ingress_guard.py         # 54 — ingress guard

# Red team Docker harness (55 agents, 10 OWASP categories)
docker compose -f docker-compose.redteam.yml up --build -d
docker exec agsec-attacker python3 /agents/run_all.py

# Continuous red-teaming with HTML reports
python3 scripts/continuous_red_team.py
python3 scripts/red_team_report.py
```

---

## Privacy

- **Runs entirely on your machine** — no telemetry, no analytics, no cloud dependency
- **Pluggable LLM backend** — use local Ollama (air-gapped), Anthropic, or OpenAI for optional semantic analysis
- **Privacy mode** — one-command killswitch for cloud LLM access (`EA_PRIVACY_MODE=full_privacy`)
- **Audit logs stored locally** — SHA-256 checksummed, tamper-detected, JSONL archived
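
The tamper-detection property of a checksummed JSONL log can be sketched as a hash chain: each entry stores the SHA-256 of the previous line, so editing any entry invalidates every later link (storing the final hash separately covers the tail). Field names here are illustrative, not the package's actual log schema:

```python
# Sketch of a tamper-evident JSONL audit log via SHA-256 chaining.
# Field names ("prev", "action", ...) are illustrative.
import hashlib, json

def append_entry(log: list[str], event: dict) -> None:
    prev = hashlib.sha256(log[-1].encode()).hexdigest() if log else "0" * 64
    log.append(json.dumps({"prev": prev, **event}, sort_keys=True))

def verify(log: list[str]) -> bool:
    prev = "0" * 64
    for line in log:
        if json.loads(line)["prev"] != prev:
            return False
        prev = hashlib.sha256(line.encode()).hexdigest()
    return True

log: list[str] = []
append_entry(log, {"action": "blocked_read", "path": ".env"})
append_entry(log, {"action": "blocked_exec", "cmd": "curl -d @.env"})
assert verify(log)
log[0] = log[0].replace(".env", "notes.txt")   # tamper with history
assert not verify(log)
```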

---

## GitHub Action

```yaml
# .github/workflows/security.yml
name: AgnosticSecurity Scan
on: [pull_request]

jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: kaushikdharamshi/AgnosticSecurity/.github/actions/security-scan@main
        with:
          severity-threshold: HIGH
          fail-on-findings: true
          scan-dlp: true
          scan-vulns: true
```

---

## 21 Components

| Component | What It Does |
|-----------|-------------|
| **DLP Engine** | File gate + content scanning + exec guard + prompt analysis |
| **API Gateway** | FastAPI proxy with input/output security pipelines |
| **LLM Proxy** | Reverse proxy with cost tracking + block rules |
| **Breach Engine** | Rule-based breach classification + immutable audit log |
| **VS Code Extension** | Context boundary for Copilot/Cursor/Windsurf |
| **Chrome Extension** | DLP for ChatGPT/Claude/Gemini web |
| **Auto-Instrumentation SDK** | Zero-code LangChain/AutoGen monitoring |
| **Smart Router** | Task classification + 5 routing strategies + failover |
| **Admin Console** | Centralized policy management + agent enrollment + RBAC |
| **Ingress Guard** | 6-layer external agent defense middleware |
| **Privacy Mode** | 3 enforced modes (`full_privacy` / `balanced` / `permissive`) |
| **Knowledge Graph** | Agent-threat-incident relationship tracking |
| **Vuln Scanner** | OWASP SAST-lite for AI-generated code |
| **Code Fingerprint Guard** | Proprietary code leak prevention |
| **Shadow AI Detector** | Discovers 12+ unauthorized AI tools |
| **Security Memory Bridge** | Obsidian-compatible security event vault |
| **Data Flow Taint Tracker** | Cross-session SHA-256 + n-gram Jaccard exfil detection |
| **Lethal Trifecta Detector** | Blocks MCP tools when private data + untrusted input + external comm all active |
| **Tool Call Guard** | DLP + taint scan on MCP/function call arguments |
| **CLI Installer** | `as-init` auto-detects AI tools, configures protections |
| **Red Team Harness** | 55 attack agents, 10 OWASP categories, Docker-isolated |

---

## Competitive Positioning

```
                    Pre-commit  In-IDE    AI Runtime    Post-incident  CI/CD     Browser    Inbound
                    (hooks)     (live)    (agent layer) (detection)    (Action)  (web LLMs) (API defense)
GitGuardian         ========    ........  ........      ========       ========  ........   ........
Snyk                ........    ========  ........      ========       ========  ........   ........
Semgrep             ========    ========  ........      ........       ========  ........   ........
Prompt Armor        ........    ........  ========      ........       ........  ........   ........
Lakera Guard        ........    ........  ========      ........       ........  ........   ........
AgnosticSecurity    ========    ========  ========      ========       ========  ========   ========
                                          ^^^^^^^^                               ^^^^^^^^   ^^^^^^^^
                                          SHARED                                 ONLY US    ONLY US
```

---

## Configuration Reference

<details>
<summary>Environment variables</summary>

### DLP Engine

| Variable | Default | Description |
|----------|---------|-------------|
| `EA_LLM_PROVIDER` | `ollama` | LLM provider for semantic analysis (`ollama`, `anthropic`, `openai`) |
| `EA_PRIVACY_MODE` | `balanced` | Privacy mode (`full_privacy`, `balanced`, `permissive`) |
| `EA_ALLOWED_DOMAINS` | — | Comma-separated egress allowlist |
| `EA_LETHAL_TRIFECTA` | `1` | Enable lethal trifecta detector |
| `EA_INGRESS_GUARD` | `1` | Enable ingress guard |
| `EA_CODE_GUARD_ENABLED` | `1` | Enable code fingerprint guard |
| `EA_SHADOW_AI_ENABLED` | `1` | Enable shadow AI detector |
| `EA_KNOWLEDGE_GRAPH_ENABLED` | `1` | Enable knowledge graph |
| `EA_DLP_CONFIDENCE_THRESHOLD` | `0.5` | Minimum confidence to flag PII |
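
For example, an air-gapped, maximally strict profile built from the variables above might look like this (values are illustrative):

```shell
# Example: air-gapped profile using the variables above (values illustrative).
export EA_LLM_PROVIDER=ollama          # keep semantic analysis local
export EA_PRIVACY_MODE=full_privacy    # killswitch for cloud LLM access
export EA_ALLOWED_DOMAINS=localhost    # deny all external egress
export EA_DLP_CONFIDENCE_THRESHOLD=0.3 # flag PII more aggressively
```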

### Gateway

| Variable | Default | Description |
|----------|---------|-------------|
| `GATEWAY_API_KEYS` | `sk-gateway-changeme` | Comma-separated client API keys |
| `OPENAI_API_KEY` | — | OpenAI API key |
| `ANTHROPIC_API_KEY` | — | Anthropic API key |
| `RATE_LIMIT_RPM` | `60` | Max requests per minute per key |

### VS Code Extension

| Setting | Default | Description |
|---------|---------|-------------|
| `agnosticsecurity.enabled` | `true` | Master toggle |
| `agnosticsecurity.autoDisableCopilot` | `true` | Disable Copilot for sensitive files |
| `agnosticsecurity.sensitiveFileGlobs` | [defaults] | Custom sensitive file patterns |

</details>

---

## Key Documents

| Document | What It Covers |
|----------|---------------|
| [Security Design](docs/SECURITY_DESIGN.md) | Threat model, design decisions, failure modes |
| [YC Application](docs/YC_APPLICATION.md) | Product positioning, market, competitive landscape |
| [OWASP LLM Mapping](docs/OWASP_LLM_MAPPING.md) | Control-to-risk compliance mapping |
| [Admin Console Design](docs/ADMIN_CONSOLE.md) | Policy management architecture |
| [Memory Agent Threats](docs/MEMORY_AGENT_THREAT_MODEL.md) | Memory poisoning, trust boundaries |
| [GTM Strategy](docs/GTM_STRATEGY.md) | Go-to-market plan, partnerships, content |
| [Competitive Analysis](docs/COMPETITIVE_ANALYSIS.md) | vs Lakera, Protect AI, Prompt Security, HiddenLayer |

---

## License

MIT
