Metadata-Version: 2.4
Name: ai-video-pipeline-mcp
Version: 0.2.0
Summary: AI Video Pipeline MCP Server
Requires-Python: >=3.12
Description-Content-Type: text/markdown
Requires-Dist: mcp[cli]>=1.26.0
Requires-Dist: pydantic>=2.0.0

# AI Video Pipeline MCP Server

An MCP server that powers an AI video production pipeline for **algorithm-explanation YouTube videos**, designed to run alongside **Claude Desktop**.

## How It Works

**Claude Desktop is the AI.** This server provides structured tools and prompt templates — no external AI API calls, no API keys required.

When you call a tool (e.g. `generate_script`), it:
1. Validates your inputs
2. Assembles a rich, domain-specific prompt using carefully crafted templates
3. Returns a **prompt packet** to Claude Desktop

Claude reads the packet and generates the output (script, scene list, image prompt, Python code) directly in the conversation.
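The validate-assemble-return flow can be sketched in plain Python (the function name, fields, and template below are hypothetical simplifications; the real tools use FastMCP and much richer templates):

```python
def generate_script_packet(topic: str, duration_minutes: int) -> dict:
    """Validate inputs, then assemble a prompt packet for Claude to act on.

    A simplified sketch of the tool flow, not the server's actual code.
    """
    # Step 1: validate inputs
    if not topic.strip():
        raise ValueError("topic must be non-empty")
    if not 1 <= duration_minutes <= 60:
        raise ValueError("duration_minutes must be between 1 and 60")

    # Step 2: assemble a domain-specific prompt from a template
    template = (
        "Write an audio-first narration script explaining {topic}. "
        "Target length: about {minutes} minutes. "
        "Mark visual cues inline as [VISUAL: ...]."
    )

    # Step 3: return the packet; Claude Desktop generates the actual script
    return {
        "tool": "generate_script",
        "prompt": template.format(topic=topic, minutes=duration_minutes),
    }
```

Claude Desktop receives the packet as the tool result and writes the script directly in the conversation; the server itself never calls a model.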

## Pipeline

```
1. generate_script        →  Full narration script with visual cues
2. [record voiceover]     →  Human step — export timestamped transcript
3. plan_scenes            →  Visual scene list with timestamps
4. generate_image_prompt  →  Whisk/Midjourney prompt per image scene
5. generate_video_compositor → Python script that assembles the final video
```
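Step 3 consumes the timestamped transcript exported in step 2. A minimal sketch of that hand-off, assuming a simple `[mm:ss] text` line format (the format is an illustration, not the server's actual contract):

```python
def parse_transcript_line(line: str) -> tuple[float, str]:
    """Parse a '[mm:ss] narration text' line into (seconds, text)."""
    stamp, text = line.split("]", 1)
    minutes, seconds = stamp.lstrip("[").split(":")
    return int(minutes) * 60 + float(seconds), text.strip()


def transcript_to_cues(lines: list[str]) -> list[tuple[float, str]]:
    """Turn a timestamped transcript into (start_time, text) cues,
    the raw material a scene planner converts into a storyboard."""
    return [parse_transcript_line(line) for line in lines]
```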

## Tools

| Tool | Description |
|------|-------------|
| `generate_script` | Write a complete audio-first narration script |
| `plan_scenes` | Convert a timestamped transcript into a scene storyboard |
| `generate_image_prompt` | Generate a Whisk/AI image prompt for a scene |
| `generate_video_compositor` | Generate a Python video compositor script |
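Each tool validates its inputs against a Pydantic schema in `utils/schemas.py`. A stdlib stand-in to illustrate the general shape (field names and the default value are hypothetical):

```python
from dataclasses import dataclass


@dataclass
class ImagePromptInput:
    """Illustrative stand-in for one of the real Pydantic input schemas."""
    scene_description: str
    style: str = "clean flat 2D diagram"  # hypothetical default

    def __post_init__(self) -> None:
        if not self.scene_description.strip():
            raise ValueError("scene_description must be non-empty")
```

Pydantic adds type coercion and JSON Schema export on top of this pattern, which is how MCP clients know what input form to render for each tool.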

## Installation & Claude Desktop Setup

### Prerequisites
- Python 3.12+
- [uv](https://github.com/astral-sh/uv) package manager
- [Claude Desktop](https://claude.ai/download)

### Install

```bash
# Clone the repo
git clone https://github.com/YOUR_USERNAME/mcp_deployment.git
cd mcp_deployment

# Install dependencies
uv sync
```

### Configure Claude Desktop

Add this to your `claude_desktop_config.json` (on Windows usually at `%APPDATA%\Claude\claude_desktop_config.json`; on macOS at `~/Library/Application Support/Claude/claude_desktop_config.json`):

```json
{
  "mcpServers": {
    "ai-video-pipeline": {
      "command": "uv",
      "args": [
        "--directory",
        "C:/Proj/DD/mcp_deployment",
        "run",
        "ai-video-mcp"
      ]
    }
  }
}
```

> **Note:** Change the `--directory` path to wherever you cloned the repo.

Restart Claude Desktop; the four tools will appear in its tool list.

## Testing Without Claude Desktop

Use the MCP Inspector to test tools interactively:

```bash
mcp dev src/mcpserver/server.py
```

Open [http://localhost:5173](http://localhost:5173), select a tool, fill in the inputs, and click **Run**.

---

## Re-publishing (GitHub)

To update the server after making changes:

### 1. Bump the version

In `pyproject.toml`, increment `version`:
```toml
version = "0.2.1"  # or 0.3.0 for bigger changes
```

### 2. Build

```bash
uv build
```

This produces `dist/ai_video_pipeline_mcp-X.Y.Z-py3-none-any.whl`.

### 3. Commit and push

```bash
git add -A
git commit -m "chore: bump to vX.Y.Z — describe your changes"
git push origin main
```

### 4. Restart Claude Desktop

Because the config above runs the server from source via `uv run`, there is no reinstall step: restarting Claude Desktop picks up the new code. Update the `--directory` path in `claude_desktop_config.json` only if you moved the repo.

### Optional: Install from GitHub directly

If you want to install the server as a package from GitHub (instead of running from source):

```bash
uv pip install git+https://github.com/YOUR_USERNAME/mcp_deployment.git
```

Then use `ai-video-mcp` as the command in `claude_desktop_config.json` (no `--directory` needed).
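With a package install, the config reduces to something like this (sketch; the server name is your choice):

```json
{
  "mcpServers": {
    "ai-video-pipeline": {
      "command": "ai-video-mcp"
    }
  }
}
```

Note that `ai-video-mcp` must be on the PATH that Claude Desktop sees; if it is not, use the full path to the installed script as the `command`.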

---

## Project Structure

```
src/mcpserver/
├── server.py                   # FastMCP server — registers all tools
├── __main__.py                 # Entry point (ai-video-mcp command)
├── tools/
│   ├── script_generator.py     # Tool 1: generate_script
│   ├── scene_planner.py        # Tool 2: plan_scenes
│   ├── image_prompt_generator.py  # Tool 3: generate_image_prompt
│   └── animation_generator.py  # Tool 4: generate_video_compositor
└── utils/
    └── schemas.py              # Pydantic input schemas
```
