Metadata-Version: 2.4
Name: heapdump-stardiver-mcp
Version: 0.1.2
Summary: MCP server for HeapDumpStarDiver - JVM heap dump analysis with DuckDB
Author: Zac Policzer
License-Expression: MIT
Project-URL: Homepage, https://github.com/ZacAttack/HeapDumpStarDiver
Project-URL: Repository, https://github.com/ZacAttack/HeapDumpStarDiver
Project-URL: Issues, https://github.com/ZacAttack/HeapDumpStarDiver/issues
Keywords: mcp,jvm,heap-dump,hprof,parquet,duckdb,memory-analysis
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Software Development :: Debuggers
Classifier: Topic :: System :: Monitoring
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: mcp[cli]>=1.0.0
Requires-Dist: duckdb>=1.0.0
Dynamic: license-file

# HeapDumpStarDiver MCP Server

MCP (Model Context Protocol) server for JVM heap dump analysis. Converts HPROF heap dumps to Parquet files and provides DuckDB-powered SQL querying and automated memory waste detection — all accessible to any MCP-compatible AI agent.

## Install

```bash
pip install heapdump-stardiver-mcp
```

## Prerequisites

The `convert_heap_dump` tool requires the HeapDumpStarDiver Rust binary. Build it from source:

```bash
git clone https://github.com/ZacAttack/HeapDumpStarDiver.git
cd HeapDumpStarDiver
cargo build --release
```

Alternatively, set the `HEAP_DUMP_STAR_DIVER_BINARY_OVERRIDE` environment variable to the path of a pre-built binary.
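
For example (the path is a placeholder; for a source build, the binary lands under `target/release/` in the repo, and the exact file name depends on the crate):

```bash
# Substitute the actual path to the binary; for a source build it is
# produced by `cargo build --release` under target/release/.
export HEAP_DUMP_STAR_DIVER_BINARY_OVERRIDE=/path/to/prebuilt-binary
```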

If you only need to analyze existing Parquet files (via `open_session`), the Rust binary is not required.

## Agent Configuration

Add to your agent's MCP config:

```json
{
  "mcpServers": {
    "heapdump-stardiver": {
      "command": "heapdump-stardiver-mcp"
    }
  }
}
```
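
If you use the binary override, it can be passed through the same config entry: Claude Code, Claude Desktop, and Cursor all accept an `env` map on stdio server entries (check your client's documentation; the path below is a placeholder):

```json
{
  "mcpServers": {
    "heapdump-stardiver": {
      "command": "heapdump-stardiver-mcp",
      "env": {
        "HEAP_DUMP_STAR_DIVER_BINARY_OVERRIDE": "/path/to/prebuilt-binary"
      }
    }
  }
}
```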

| Agent | Config file |
|-------|------------|
| Claude Code | `.mcp.json` in repo root |
| Claude Desktop | `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) |
| Cursor | `.cursor/mcp.json` in repo root |
| Kiro | `.kiro/settings/mcp.json` in repo root |

## Available Tools

| Tool | Description |
|------|-------------|
| `convert_heap_dump` | Convert HPROF → Parquet and open analysis session |
| `open_session` | Open session from existing Parquet files |
| `list_sessions` | Show all active sessions |
| `close_session` | Close DuckDB connection, keep files |
| `cleanup_session` | Close connection and delete Parquet files |
| `list_parquet_files` | List tables with row counts and schemas |
| `query_heap` | Run DuckDB SQL against Parquet (paginated) |
| `analyze_heap` | Automated waste detection and heap profiling |

## Typical Workflow

1. `convert_heap_dump(hprof_path="/path/to/dump.hprof")` — convert and open session
2. `list_parquet_files()` — discover available tables
3. `analyze_heap()` — automated waste detection (duplicate strings, bad collections, boxed primitives, etc.)
4. `query_heap(sql="SELECT ...")` — ad-hoc DuckDB queries (see the sketch below for a worked example)
5. `close_session(id)` or `cleanup_session(id, confirm=True)`
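
The same workflow can be driven by any MCP client, not just an agent. Below is a minimal sketch using the MCP Python SDK (the `mcp` package this server already depends on); the HPROF path is a placeholder and the SQL's table and column names are hypothetical, so inspect the schemas from `list_parquet_files` before writing real queries.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the installed server over stdio.
    params = StdioServerParameters(command="heapdump-stardiver-mcp")
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 1. Convert the dump and open an analysis session.
            await session.call_tool(
                "convert_heap_dump", {"hprof_path": "/path/to/dump.hprof"}
            )

            # 2. Discover the available tables and their schemas.
            print(await session.call_tool("list_parquet_files", {}))

            # 4. Ad-hoc SQL. The table/column names here are hypothetical;
            #    use the schemas printed above to write real queries.
            print(await session.call_tool(
                "query_heap",
                {"sql": (
                    "SELECT class_name, COUNT(*) AS n FROM instances "
                    "GROUP BY class_name ORDER BY n DESC LIMIT 20"
                )},
            ))
            # Steps 3 and 5 (analyze_heap, close_session) follow the
            # same call_tool pattern.


asyncio.run(main())
```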

## Waste Analysis

The `analyze_heap` tool detects common JVM memory waste patterns across three tiers of increasing thoroughness:

- **Tier 1** (fast): duplicate strings, empty and single-element collections, bad arrays, boxed primitives
- **Tier 2** (default): everything in Tier 1, plus collection sizing, duplicate byte arrays, class count, GC roots, DirectByteBuffers, and thread stacks
- **Tier 3** (thorough): everything in Tier 2, plus duplicate object arrays and estimated shallow sizes
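
If the tool accepts a tier argument (a hypothetical parameter name; confirm it against the tool's input schema in your MCP client), a quick pass might look like `analyze_heap(tier=1)`.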

## License

MIT
