Metadata-Version: 2.4
Name: mcp-server-kaggle-exec
Version: 0.1.0
Summary: MCP server for executing Python code on Kaggle GPU runtimes (T4/P100/TPU)
Author: Paritosh Dwivedi
License-Expression: MIT
License-File: LICENSE
Keywords: cuda,gpu,kaggle,machine-learning,mcp
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Requires-Dist: kaggle
Requires-Dist: mcp[cli]>=1.6.0
Description-Content-Type: text/markdown

# mcp-server-kaggle-exec

<!-- mcp-name: io.github.pdwi2020/mcp-server-kaggle-exec -->

MCP server that executes Python code on Kaggle GPU runtimes (T4 x2, P100, TPU) from any MCP-compatible AI assistant, including Claude Code, Claude Desktop, Gemini CLI, and Cline. Run GPU-accelerated code (CUDA, PyTorch, TensorFlow) without local GPU hardware, using Kaggle's free ~30 hr/week GPU quota.

## Prerequisites

- Python 3.10+
- A [Kaggle](https://www.kaggle.com) account
- Kaggle API credentials: either the `KAGGLE_API_TOKEN` env var (a `KGAT_*` token) or `~/.kaggle/kaggle.json` (see [Authentication](#authentication))

## Installation

```bash
pip install mcp-server-kaggle-exec
```

Or run directly with `uvx`:

```bash
uvx mcp-server-kaggle-exec
```

## Configuration

### Claude Code

Add to your project's `.mcp.json` or `~/.claude/.mcp.json`:

```json
{
  "mcpServers": {
    "kaggle-exec": {
      "command": "mcp-server-kaggle-exec"
    }
  }
}
```

Or via the CLI:

```bash
claude mcp add kaggle-exec mcp-server-kaggle-exec
```

### Claude Desktop

Add to `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "kaggle-exec": {
      "command": "mcp-server-kaggle-exec"
    }
  }
}
```

### Gemini CLI

```bash
gemini mcp add kaggle-exec -- mcp-server-kaggle-exec
```

## Tools

### `kaggle_execute`

Execute inline Python code on a Kaggle GPU kernel.

| Parameter    | Type   | Default | Description                                   |
|--------------|--------|---------|-----------------------------------------------|
| `code`       | string | —       | Python code to execute (required)             |
| `enable_gpu` | bool   | `true`  | Whether to request GPU acceleration           |
| `timeout`    | int    | `600`   | Max wait time in seconds                      |

Returns JSON with `stdout`, `stderr`, `status`, `output_files`, and `execution_time`.
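
An illustrative result payload (the field names match the list above; the values are hypothetical):

```json
{
  "status": "complete",
  "stdout": "True\nTesla P100-PCIE-16GB\n",
  "stderr": "",
  "output_files": [],
  "execution_time": 94.2
}
```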

### `kaggle_execute_file`

Execute a local `.py` file on a Kaggle GPU kernel.

| Parameter    | Type   | Default | Description                                   |
|--------------|--------|---------|-----------------------------------------------|
| `file_path`  | string | —       | Path to a local `.py` file (required)         |
| `enable_gpu` | bool   | `true`  | Whether to request GPU acceleration           |
| `timeout`    | int    | `600`   | Max wait time in seconds                      |

### `kaggle_execute_notebook`

Execute code and download all generated output files (images, models, CSVs, etc.).

| Parameter    | Type   | Default | Description                                   |
|--------------|--------|---------|-----------------------------------------------|
| `code`       | string | —       | Python code to execute (required)             |
| `output_dir` | string | —       | Local directory for downloaded artifacts (required) |
| `enable_gpu` | bool   | `true`  | Whether to request GPU acceleration           |
| `timeout`    | int    | `600`   | Max wait time in seconds                      |

Output files are downloaded to `output_dir`. To make files available for download, write them to the kernel's working directory (`/kaggle/working`, the default current directory) in your Kaggle code.

## Examples

**Check GPU availability:**
```
kaggle_execute(code="import torch; print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))")
```

**Run nvidia-smi:**
```
kaggle_execute(code="import subprocess; print(subprocess.run(['nvidia-smi'], capture_output=True, text=True).stdout)")
```

**Train a model and download weights:**
```
kaggle_execute_notebook(
    code="import torch; model = torch.nn.Linear(10, 1); torch.save(model.state_dict(), 'model.pt')",
    output_dir="./outputs"
)
```

**CPU-only execution (faster startup):**
```
kaggle_execute(code="print('Hello from Kaggle!')", enable_gpu=False)
```

## How It Works

Unlike Google Colab (which uses real-time WebSocket execution), Kaggle uses **batch execution**:

1. Your code is pushed as a private Kaggle kernel
2. Kaggle queues and runs the kernel (30-120s startup + execution time)
3. Once complete, the output log and files are downloaded
4. The kernel remains in your account as a private kernel

This means there's no streaming output — you get results only after execution completes.
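
For reference, a roughly equivalent manual flow with the official `kaggle` CLI looks like the sketch below. The kernel slug, filenames, and metadata values are illustrative, not necessarily the server's exact internals:

```bash
# Sketch of the batch flow the server automates (slug and filenames are illustrative)
mkdir job && cd job

cat > script.py <<'EOF'
print("hello from Kaggle")
EOF

cat > kernel-metadata.json <<'EOF'
{
  "id": "your-username/mcp-exec-job",
  "title": "mcp-exec-job",
  "code_file": "script.py",
  "language": "python",
  "kernel_type": "script",
  "is_private": true,
  "enable_gpu": true,
  "enable_internet": false
}
EOF

kaggle kernels push -p .                                       # step 1: push the kernel
kaggle kernels status your-username/mcp-exec-job               # step 2: poll until "complete"
kaggle kernels output your-username/mcp-exec-job -p ./output   # step 3: fetch log + files
```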

## Authentication

Two authentication methods are supported:

### Option 1: KGAT_* Access Token (recommended)

Kaggle API v2 access tokens (`KGAT_*` format) are passed via an environment variable:

```bash
export KAGGLE_API_TOKEN=KGAT_your_token_here
```

To get a token: go to [kaggle.com/settings](https://www.kaggle.com/settings) → **API** → **Create New Access Token**.

When using with MCP, pass the env var in your server config:

```json
{
  "mcpServers": {
    "kaggle-exec": {
      "command": "mcp-server-kaggle-exec",
      "env": {
        "KAGGLE_API_TOKEN": "KGAT_your_token_here"
      }
    }
  }
}
```

### Option 2: Legacy kaggle.json

Place your Kaggle API key at `~/.kaggle/kaggle.json`:

1. Go to [kaggle.com/settings](https://www.kaggle.com/settings)
2. Scroll to **API** section
3. Click **Create New Token**
4. Move the downloaded `kaggle.json` to `~/.kaggle/kaggle.json`
5. Set permissions: `chmod 600 ~/.kaggle/kaggle.json`
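
With either method in place, you can sanity-check your credentials before wiring up the MCP server by running any authenticated CLI call, for example:

```bash
# Lists your own kernels; succeeds only if authentication is configured correctly
kaggle kernels list --mine
```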

## GPU Quota

Kaggle provides ~30 hours of free GPU per week. The API supports `enable_gpu: true/false` but does not allow selecting specific GPU types (T4 vs P100) — Kaggle assigns the GPU automatically.

For CPU-only tasks, set `enable_gpu=False` to avoid consuming GPU quota.

## Troubleshooting

**"Kaggle authentication failed"** — Ensure either `KAGGLE_API_TOKEN` env var is set (KGAT_* token) or `~/.kaggle/kaggle.json` exists. See [Authentication](#authentication) above.

**"Kernel timed out"** — Increase the `timeout` parameter. Kaggle kernel startup can take 30-120 seconds, plus execution time.

**"GPU quota exceeded"** — You've used your ~30hr weekly GPU quota. Wait for the weekly reset or use `enable_gpu=False` for CPU-only execution.

**"Kernel status: error"** — Check the stderr in the response for Python errors in your code.

## Comparison with mcp-server-colab-exec

| Aspect | colab-exec | kaggle-exec |
|--------|-----------|-------------|
| Execution model | Real-time (WebSocket) | Batch (push + poll) |
| Startup time | ~10-30s | ~30-120s |
| Auth | Google OAuth2 (browser) | API token (env var or `kaggle.json`) |
| GPU types | T4, L4 | T4 x2, P100, TPU |
| GPU selection | Can pick T4/L4 | API only supports on/off |
| Free quota | Usage-based | ~30 hr/week |
| Output | Streaming | After completion |

## License

MIT
