Metadata-Version: 2.4
Name: k8s-security-agent
Version: 0.3.0
Summary: Agentic Kubernetes security scanner powered by an LLM
License: MIT
Requires-Python: >=3.11
Requires-Dist: kubernetes>=35.0.0
Requires-Dist: openai>=2.32.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: python-dotenv>=1.2.2
Requires-Dist: rich>=15.0.0
Description-Content-Type: text/markdown

# k8s-security-agent

An agentic Kubernetes security scanner powered by an LLM. Connect it to any cluster and chat with it to find misconfigurations, RBAC issues, exposed secrets, and more.

## Prerequisites

- Python 3.11+
- [uv](https://docs.astral.sh/uv/getting-started/installation/)
- A running Kubernetes cluster reachable via your kubeconfig (any flavor: EKS, GKE, AKS, kind, k3d, minikube, Docker Desktop, etc.)
- An API key for any OpenAI-compatible LLM endpoint

> Don't have a cluster handy? See [Optional: spin up a local cluster with k3d](#optional-spin-up-a-local-cluster-with-k3d) below.

## LLM Providers

The agent talks to any **OpenAI-compatible** endpoint — point `LLM_BASE_URL` at the provider of your choice. A few known-good options:

| Provider  | `LLM_BASE_URL`                                            | Example `LLM_MODEL`        | Get API Key            |
|-----------|-----------------------------------------------------------|----------------------------|------------------------|
| Groq      | `https://api.groq.com/openai/v1`                          | `llama-3.3-70b-versatile`  | console.groq.com       |
| Mistral   | `https://api.mistral.ai/v1`                               | `mistral-large-latest`     | console.mistral.ai     |
| Gemini    | `https://generativelanguage.googleapis.com/v1beta/openai/`| `gemini-2.0-flash`         | aistudio.google.com    |
| Anthropic | `https://api.anthropic.com/v1`                            | `claude-sonnet-4-6`        | console.anthropic.com  |
| OpenAI    | `https://api.openai.com/v1`                               | `gpt-4o`                   | platform.openai.com    |
| Ollama    | `http://localhost:11434/v1`                               | `llama3.1`                 | (local, no key needed) |

Any other OpenAI-spec compatible host works the same way — drop in its base URL, model, and API key.
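Before launching the agent, you can smoke-test a provider directly with `curl` against the standard `/chat/completions` route (shown here with the env vars from the next section; any of the hosts above should respond the same way):

```bash
# Quick smoke test of an OpenAI-compatible endpoint.
# Assumes LLM_BASE_URL, LLM_MODEL, and LLM_API_KEY are already exported.
curl -s "$LLM_BASE_URL/chat/completions" \
  -H "Authorization: Bearer $LLM_API_KEY" \
  -H "Content-Type: application/json" \
  -d "{\"model\": \"$LLM_MODEL\", \"messages\": [{\"role\": \"user\", \"content\": \"ping\"}]}"
```

A healthy reply is a JSON object containing a `choices` array; an authentication error here means the key or base URL is wrong, independent of anything in this tool.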

## Installation

Install the CLI globally with `uv` (recommended) or `pipx`:

```bash
uv tool install k8s-security-agent
```

or, with pipx:

```bash
pipx install k8s-security-agent
```

This puts a `k8s-security-agent` command on your PATH.
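You can confirm the command resolved onto your PATH:

```bash
# Prints the installed binary's location if the install succeeded
command -v k8s-security-agent
```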

### From source (for development)

```bash
git clone https://github.com/JOSHUAJEBARAJ/k8-security-agent
cd k8-security-agent
uv sync
```

## Configure the LLM

The agent reads three required environment variables: `LLM_BASE_URL`, `LLM_MODEL`, and `LLM_API_KEY`. If any are missing, the agent exits with an error telling you what to set.

Export them in your shell:
```bash
export LLM_BASE_URL=https://api.mistral.ai/v1
export LLM_MODEL=mistral-large-latest
export LLM_API_KEY=your_api_key_here
```
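A quick way to confirm all three are set before launching (this uses bash indirect expansion, so run it in bash rather than plain `sh`):

```bash
# Report any required LLM variable that is unset or empty
for v in LLM_BASE_URL LLM_MODEL LLM_API_KEY; do
  [ -n "${!v}" ] || echo "missing: $v"
done
```

If the loop prints nothing, you're good to go.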

Or, if running from source, copy `.env.example` to `.env` and fill it in:
```bash
cp .env.example .env
```
```bash
# .env
LLM_BASE_URL=https://api.mistral.ai/v1
LLM_MODEL=mistral-large-latest
LLM_API_KEY=your_api_key_here
```

> `.env` is only loaded when you run from a checkout of this repo. If you installed the package globally, export the vars in your shell (or your shell rc) instead.

**Switching providers or models** — change `LLM_BASE_URL`, `LLM_MODEL`, and `LLM_API_KEY` to point at any OpenAI-compatible endpoint. No code changes needed.

> The agent relies on **tool/function calling**, so any model you pick must support it. Most "instruct" or "chat" flagship models do; small/older models often don't.

## Optional: spin up a local cluster with k3d

If you don't already have a cluster, [k3d](https://k3d.io/#installation) is the quickest way to get one running locally (requires Docker).

```bash
# Install k3d (Homebrew shown; see k3d.io for other platforms)
brew install k3d

# Create a cluster
k3d cluster create k8s-security-test --agents 2

# Verify it's running
kubectl get nodes
```

Any other local option works just as well — `kind`, `minikube`, or Docker Desktop's built-in Kubernetes. The agent only needs a kubeconfig that can reach the cluster.
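When you're finished experimenting, tear the test cluster down:

```bash
# Remove the cluster created above
k3d cluster delete k8s-security-test
```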

## Running the agent

If you installed via `uv tool` or `pipx`:
```bash
k8s-security-agent
```

If you're running from a source checkout:
```bash
uv run k8s-security-agent
```

## Usage

Just type naturally — the agent decides which checks to run based on your question.

```
you> run a full security audit
you> what pods are running in the default namespace?
you> scan the nginx pod for vulnerabilities
you> show me all RBAC issues
you> are there any privileged containers?
you> list all namespaces
```

Type `exit` or press `Ctrl+C` to quit.

## Deploying a vulnerable workload for testing

A sample deployment with intentional misconfigurations is included:

```bash
kubectl apply -f sample-deployment.yaml
```

Then ask the agent to scan it:
```
you> scan the vulnerable-app pod for security issues
```

To clean up:
```bash
kubectl delete -f sample-deployment.yaml
```

## Security checks

| Check | What it detects |
|---|---|
| `privileged` | Privileged containers, allowPrivilegeEscalation |
| `rbac` | cluster-admin bindings, wildcard role grants |
| `secrets` | Hardcoded secrets in env vars |
| `network` | Namespaces missing NetworkPolicy |
| `resources` | Containers with no CPU/memory limits |
| `apparmor` | Missing AppArmor profile annotations |
| `automount` | Default ServiceAccount with auto-mounted API tokens |
| `capabilities` | Missing capability drops, dangerous adds |
| `hostns` | hostNetwork, hostPID, hostIPC enabled |
| `image` | Unpinned or latest image tags |
| `mounts` | Sensitive host path mounts |
| `nonroot` | Missing runAsNonRoot |
| `rootfs` | readOnlyRootFilesystem not set |
| `seccomp` | Missing seccomp profile |
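To cross-check one of these findings by hand, for example `privileged`, you can query the same field directly with `kubectl`, independent of the agent:

```bash
# List namespace, pod name, and each container's privileged flag across all namespaces
kubectl get pods -A -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].securityContext.privileged}{"\n"}{end}'
```

Rows showing `true` in the last column are pods the `privileged` check should also flag.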
