Metadata-Version: 2.1
Name: velociraptor-mcp-server
Version: 0.1.5
Summary: FastMCP server that exposes Velociraptor APIs.
Author: Velociraptor MCP maintainers
License: MIT License
        
        Copyright (c) 2025 Velociraptor MCP maintainers
        
        Permission is hereby granted, free of charge, to any person obtaining a copy
        of this software and associated documentation files (the "Software"), to deal
        in the Software without restriction, including without limitation the rights
        to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
        copies of the Software, and to permit persons to whom the Software is
        furnished to do so, subject to the following conditions:
        
        The above copyright notice and this permission notice shall be included in all
        copies or substantial portions of the Software.
        
        THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
        IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
        FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
        AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
        LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
        OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
        SOFTWARE.
        
Keywords: velociraptor,mcp,fastmcp,ir,dfir,forensics
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Information Technology
Classifier: Topic :: Security
Classifier: Topic :: System :: Monitoring
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: fastmcp
Requires-Dist: pyvelociraptor
Requires-Dist: pydantic
Requires-Dist: python-dotenv
Requires-Dist: grpcio
Requires-Dist: grpcio-tools
Requires-Dist: pandas
Provides-Extra: dev
Requires-Dist: pytest; extra == "dev"

# Velociraptor MCP Server

## Quickstart (from PyPI)
```sh
python3 -m venv .venv
. .venv/bin/activate
pip install velociraptor-mcp-server
velociraptor-mcp --config /absolute/path/to/velociraptor_lab/volumes/api/api.config.yaml
```
You need a Velociraptor mTLS API config (`api.config.yaml`). The included `velociraptor_lab` can generate one (see “Using the lab”).

A FastMCP-based server that exposes Velociraptor capabilities (VQL queries, hunts, artifacts, VFS/file ops, monitoring, alerts) over the MCP protocol for use with Codex/ChatGPT-style agents.

## Prerequisites
- Python 3.10+
- Podman (or Docker) if you want to use the included `velociraptor_lab` for local testing.
- Generated Velociraptor mTLS API config (`api.config.yaml`) – the lab can generate this for you.

## Installation (from source or dev)
Create a virtualenv (avoids macOS/Homebrew PEP 668 errors) and install:
```sh
python3 -m venv .venv
. .venv/bin/activate
pip install .                 # runtime install from source
pip install -e ".[dev]"       # editable install with dev deps (quotes keep zsh from globbing)
```
End users should install the package from PyPI; use an editable install for development. The legacy workflow still works if you prefer the raw requirements:
```sh
pip install -r requirements.txt
```

## Running the MCP server
After installing, you can either call the module directly or use the installed console script:
```sh
# installed entry point
velociraptor-mcp --config velociraptor_lab/volumes/api/api.config.yaml \
  --log-level INFO --server-name velociraptor-mcp

# or, from source
python3 main.py --config velociraptor_lab/volumes/api/api.config.yaml \
  --log-level INFO --server-name velociraptor-mcp
```
Options:
- `--config` or env `VELOCIRAPTOR_API_CONFIG`: path to `api.config.yaml` (default `volumes/api/api.config.yaml`)
- `--log-level` or env `MCP_LOG_LEVEL` (default `INFO`)
- `--server-name` or env `MCP_SERVER_NAME`
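The flag/env-var layering can be sketched as follows. This is an illustrative helper, not part of the package, and the flag-beats-env precedence is the conventional behavior assumed here:

```python
import os

def resolve_config(cli_value=None, env=None):
    """Hypothetical helper: CLI flag wins, then the env var, then the default."""
    env = os.environ if env is None else env
    return (
        cli_value
        or env.get("VELOCIRAPTOR_API_CONFIG")
        or "volumes/api/api.config.yaml"
    )

# Flag beats env var; env var beats the built-in default.
print(resolve_config("lab/api.config.yaml", {"VELOCIRAPTOR_API_CONFIG": "env.yaml"}))
```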

## Available tools (summary)
- VQL: `query_vql`
- Clients: `list_clients`, `get_client_info`, `search_clients`
- Hunts: `list_hunts`, `get_hunt_details`, `create_hunt`, `stop_hunt`, `get_hunt_results`
- Artifacts: `list_artifacts`, `collect_artifact`, `upload_artifact`, `get_artifact_definition`
- Files/VFS: `list_directory`, `get_file_info`, `download_file`
- Monitoring/Alerts: `get_server_stats`, `get_client_activity`, `list_alerts`, `create_alert`
- Resources/Prompts: artifact catalog, VQL templates, incident-response prompts
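VQL tools such as `query_vql` return tabular results; a common pattern is to post-process them as JSON rows. A minimal sketch, assuming rows arrive as a JSON array of dicts (the sample payload below is invented for illustration):

```python
import json

# Hypothetical query_vql result: a JSON array of row dicts.
raw = (
    '[{"client_id": "C.1234", "os_info": {"system": "linux"}},'
    ' {"client_id": "C.5678", "os_info": {"system": "windows"}}]'
)

rows = json.loads(raw)
linux_ids = [r["client_id"] for r in rows if r["os_info"]["system"] == "linux"]
print(linux_ids)
```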

## Using the lab (recommended for development)
`velociraptor_lab/` contains a Podman/Docker stack that spins up:
- Velociraptor server (GUI + gRPC)
- Test client that should auto-enroll
- Generated mTLS configs under `velociraptor_lab/volumes/{server,client,api,datastore}`

Quick start (Podman):
```sh
cd velociraptor_lab
podman machine start                     # macOS/AppleHV
podman compose -f podman-compose.yml up --build -d
# or use the manual podman run commands in velociraptor_lab/README.md
```
Verify API:
```sh
. ../.venv/bin/activate
python test_api.py --config volumes/api/api.config.yaml
```
Then run the MCP server (see above) pointing at the generated `api.config.yaml`.

Troubleshooting lab enrollment:
- The client must reach `https://VelociraptorServer:8000/` inside the podman network. Keep hostname/alias consistent with the cert CN.
- If `clients()` is empty, delete `velociraptor_lab/volumes/*` and redeploy the lab.
- For manual probing, `python test_api.py --query "SELECT * FROM clients()"`.

## Development
- Create a fresh venv and install dev extras: `python3 -m venv .venv && . .venv/bin/activate && pip install -e ".[dev]"`.
- Run unit tests locally: `. .venv/bin/activate && pytest`.
- Validate API wiring against the lab: `python test_api.py --config velociraptor_lab/volumes/api/api.config.yaml --query "SELECT * FROM clients()"` (after bringing up the lab).
- Linting is minimal today; focus on tests and keeping tool names/config keys aligned with VQL and env vars.

## Tests
```sh
. .venv/bin/activate
pytest
```
Note: the lab API test is skipped automatically if `pyvelociraptor` or the generated configs are missing.
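The skip condition amounts to a two-part check; a minimal sketch (`lab_test_skipped` is an illustrative name, not a function in the test suite):

```python
import importlib.util
from pathlib import Path

def lab_test_skipped(config_path="velociraptor_lab/volumes/api/api.config.yaml"):
    """Illustrative skip condition: needs both pyvelociraptor and a generated config."""
    has_dep = importlib.util.find_spec("pyvelociraptor") is not None
    return not (has_dep and Path(config_path).exists())
```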

## Codex MCP setup
You can register this server with the Codex CLI (stdio transport).

**Fast path (CLI):**
```sh
codex mcp add velociraptor \
  --env VELOCIRAPTOR_API_CONFIG=/absolute/path/to/velociraptor_lab/volumes/api/api.config.yaml \
  -- python3 main.py --config /absolute/path/to/velociraptor_lab/volumes/api/api.config.yaml \
  --log-level INFO --server-name velociraptor-mcp
```
- Run the command from the repo root or use absolute paths so Codex can find `main.py`.
- Restart Codex; inside the TUI `/mcp` shows active servers.

**Config file (manual):** add to `~/.codex/config.toml` if you prefer editing directly:
```toml
[mcp_servers.velociraptor]
command = "python3"
args = ["main.py", "--config", "/absolute/path/to/velociraptor_lab/volumes/api/api.config.yaml", "--log-level", "INFO", "--server-name", "velociraptor-mcp"]
env = { VELOCIRAPTOR_API_CONFIG = "/absolute/path/to/velociraptor_lab/volumes/api/api.config.yaml" }
cwd = "/absolute/path/to/velociraptor-mcp-server"
startup_timeout_sec = 15   # optional; defaults to 10
tool_timeout_sec = 120     # optional; defaults to 60
```
Either approach keeps your `api.config.yaml` path in one place via `VELOCIRAPTOR_API_CONFIG`. Codex shares the same MCP config between the CLI and IDE.

## Troubleshooting
- `ModuleNotFoundError` or missing `grpc` → make sure you’re inside the venv and have run `pip install .`.
- `Velociraptor API config not found` → point `--config` / `VELOCIRAPTOR_API_CONFIG` to the generated `api.config.yaml`.
- Handshake fails in Codex → check `~/.codex/log/codex-tui.log`; most common is the missing config path.
- PEP 668 “externally managed” error → always use a venv (as above).

## Structure
```
mcp_server/       # server, tools, resources, prompts
main.py           # entrypoint
tests/            # unit tests + fixtures
requirements.txt  # shared deps (MCP + lab)
velociraptor_lab/ # podman/docker lab for local Velociraptor API
```

## Publishing to PyPI (manual)
```sh
. .venv/bin/activate
pip install -U pip build twine
python -m build
twine upload dist/*   # needs credentials: TWINE_USERNAME/TWINE_PASSWORD (or __token__ + API token), or ~/.pypirc
```
Tagging a release is recommended (see CI section).

## Releasing via GitHub Actions
If your CI publishes on tagged pushes, use these steps:
```sh
git add pyproject.toml mcp_server/__init__.py README.md
git commit -m "Release 0.1.5"        # or skip if already committed
git tag -a v0.1.5 -m "v0.1.5"        # bump tag version as needed
git push origin main                  # adjust branch name if different
git push origin v0.1.5                # triggers the release workflow
```
Trusted Publishing: the workflow uses PyPI OIDC; register `.github/workflows/ci.yml` as a Trusted Publisher in PyPI project settings. No `PYPI_API_TOKEN` is needed once linked.
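The publish job's shape can be sketched as below. This is an illustrative fragment, not the repository's actual workflow; job and step names are placeholders, while the `id-token: write` permission and the `pypa/gh-action-pypi-publish` action are the parts Trusted Publishing requires:

```yaml
# Illustrative publish job for PyPI Trusted Publishing (OIDC).
publish:
  runs-on: ubuntu-latest
  permissions:
    id-token: write   # required for OIDC; no PYPI_API_TOKEN secret needed
  steps:
    - uses: actions/checkout@v4
    - run: pipx run build
    - uses: pypa/gh-action-pypi-publish@release/v1
```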

## License
This project is licensed under the MIT License. See `LICENSE` for the full text.
