Metadata-Version: 2.4
Name: renderdeck-mcp
Version: 0.9.1
Summary: MCP server for generating PowerPoint presentations via Gemini
Project-URL: Homepage, https://github.com/exjch/renderdeck
Project-URL: Repository, https://github.com/exjch/renderdeck
Project-URL: Issues, https://github.com/exjch/renderdeck/issues
Project-URL: Changelog, https://github.com/exjch/renderdeck/blob/main/CHANGELOG.md
Author: Francis Rosario
License-Expression: MIT
Keywords: ai,gemini,llm,mcp,powerpoint,pptx,presentations,slides
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Multimedia :: Graphics :: Presentation
Classifier: Topic :: Office/Business
Requires-Python: >=3.12
Requires-Dist: cryptography>=42.0
Requires-Dist: edge-tts>=6.1
Requires-Dist: google-genai>=1.0
Requires-Dist: mcp[cli]>=1.0
Requires-Dist: pillow>=10.0
Requires-Dist: python-pptx>=1.0
Provides-Extra: dev
Requires-Dist: mypy>=1.10; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23; extra == 'dev'
Requires-Dist: pytest-cov>=5.0; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Requires-Dist: ruff>=0.4; extra == 'dev'
Description-Content-Type: text/markdown

# RenderDeck

**AI-powered presentation generation as an MCP server.** Describe the presentation you want in conversation with your LLM, and RenderDeck generates a `.pptx` with Gemini-created slide images.

> **Quick start:** After installing, tell your LLM *"How do I use RenderDeck?"* to get a detailed explanation of all tools, workflows, and example prompts.

## Getting Started

### 1. Install

```bash
pip install renderdeck-mcp
# or run directly with uvx (no install needed):
uvx renderdeck-mcp serve
```

### 2. Configure your LLM client

Add RenderDeck as an MCP server in your client's config.

**Claude Desktop** — add to `~/Library/Application Support/Claude/claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "renderdeck": {
      "command": "uvx",
      "args": ["renderdeck-mcp", "serve"],
      "env": {
        "GOOGLE_GEMINI_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

**Claude Code** — add to `.mcp.json` in your project root:

```json
{
  "mcpServers": {
    "renderdeck": {
      "command": "uvx",
      "args": ["renderdeck-mcp", "serve"],
      "env": {
        "GOOGLE_GEMINI_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

**Cursor** — add to `.cursor/mcp.json`:

```json
{
  "mcpServers": {
    "renderdeck": {
      "command": "uvx",
      "args": ["renderdeck-mcp", "serve"],
      "env": {
        "GOOGLE_GEMINI_API_KEY": "your-api-key-here"
      }
    }
  }
}
```

### 3. Set up your API key

Ask your LLM to run the `setup_gemini_key` tool with your [Gemini API key](https://aistudio.google.com/apikey), or set `GOOGLE_GEMINI_API_KEY` in the server's environment.

### 4. Develop your storyline

Tell your LLM what presentation you need — the topic, audience, tone, and any specific points to cover. The LLM reads the `renderdeck://reference` resource to learn the slide prompt format, then collaborates with you to develop:

- **Storyline and structure** — how many slides, what each one covers, the narrative arc
- **Visual direction** — style, color palette, imagery for each slide
- **Speaker notes** — what to say on each slide, with timing guidance

You iterate in conversation until the storyline feels right. The LLM then writes the slide prompt file — a structured markdown document — so you never need to learn the format yourself.
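As a purely illustrative sketch (the headings and field names below are hypothetical; the authoritative syntax lives in the `renderdeck://reference` resource), a slide prompt file combines per-slide visual prompts with speaker notes:

```markdown
# Deck: Q1 Product Roadmap   <!-- hypothetical structure; see renderdeck://reference -->

## Slide 1: Where We Are
Visual: Minimalist timeline on a dark background, three milestones highlighted in amber.
**Say:** Welcome, everyone. Let's start with where the roadmap stands today. (~30s)

## Slide 2: What Shipped
Visual: Clean grid of four feature cards with simple line icons, dark theme.
**Say:** Four features shipped this quarter. Here's what each one unlocked. (~45s)
```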

### 5. Validate and generate

The LLM calls `validate_prompt_file` to catch any formatting issues, then `generate_deck` to produce a `.pptx` with Gemini-generated slide images.

### 6. Iterate

Review the generated deck. Ask the LLM to tweak specific slides — adjust the visual prompt, reword speaker notes, or restructure the flow. Use `regenerate_slide` to re-render individual slides without rebuilding the entire deck.

## Example Prompts

Just tell your LLM what you need — no need to learn the prompt file format:

> **"Create a 10-slide deck on our Q1 product roadmap for the engineering all-hands. Audience: 50 engineers. Tone: confident but honest about delays. Include a timeline slide and a risks slide."**

The LLM reads the reference, drafts a prompt file with 10 slides, validates it, and calls `generate_deck`.

> **"I have a blog post about our new API. Turn it into a 7-slide technical presentation for DevRel to present at a meetup. Use dark theme with code snippets on slides."**

The LLM distills the blog post into key points, structures them as slides, and generates visuals matching the dark theme.

> **"Here is our investor update email [paste text]. Create a 12-slide board deck. Use notes_mode narration so I can feed it into Pictory for a video version."**

The LLM extracts metrics and narrative from the email, builds slides with `**Say:**` notes for narration, and generates with `notes_mode="narration"`.

> **"Generate the deck, then convert it to a narrated video using the 'roger' voice."**

The LLM generates slides, then calls `generate_video` to compose a narrated MP4 with the Roger voice.

> **"Show me all available voices — I want something energetic and male."**

The LLM calls `list_voices` and highlights male voices, noting style descriptors to help you choose.

> **"Create a 5-slide deck about our new API launch, then convert it to video with the 'guy' voice."**

The LLM drafts a prompt file, generates the deck, and immediately calls `generate_video` with `voice="guy"` to produce a narrated MP4.

## Getting the Best Results

Don't jump straight to slide generation. Use the LLM to develop the raw material first:

1. **Brainstorm** — Tell the LLM your topic, audience, and goal. Ask it to outline key messages and a narrative arc.
   > "I need a presentation about [topic] for [audience]. The goal is [goal]. Help me identify the 5-7 key messages and suggest a narrative arc."
2. **Structure** — Ask the LLM to organise the outline into a slide sequence with one idea per slide and clear transitions.
   > "Organise these messages into a slide-by-slide outline. One idea per slide, with clear transitions. Suggest a title for each slide."
3. **Enrich** — Paste in any source material (docs, emails, data, meeting notes). Ask the LLM to distill the most compelling points for your audience.
   > "Here is my source material: [paste docs/emails/data]. Distill the most compelling points for my audience and weave them into the outline."
4. **Generate** — Once the storyline is solid, say "now generate the deck". The LLM writes the prompt file and calls the tools.
   > "The outline looks good. Now generate the deck using RenderDeck."
5. **Iterate** — Review the `.pptx`, then ask the LLM to tweak individual slides, adjust visuals, or reword notes.
   > "Slide 4 needs a stronger visual — make it more data-driven. And reword the speaker notes on slide 7 to be more conversational."

> **Pro tip:** The more context you give the LLM up front (audience, tone, key data points, what to emphasise, what to skip), the better the first draft. Treat the LLM as a presentation coach, not just a generator.

## MCP Tools

| Tool | Description |
|------|-------------|
| `generate_deck` | Full pipeline: parse, generate images, assemble .pptx |
| `validate_prompt_file` | Validate a prompt file (dry-run, no API key needed) |
| `generate_slide` | Generate a single slide image and cache it (avoids timeouts) |
| `assemble_deck` | Assemble .pptx from cached slide images |
| `regenerate_slide` | Regenerate specific slides and rebuild the .pptx |
| `setup_gemini_key` | Store Gemini API key securely (Fernet-encrypted) |
| `get_setup_status` | Check configuration readiness |
| `update_settings` | Update model, aspect ratio, and other settings |
| `get_version` | Return the server version |
| `generate_video` | Convert a presentation to a narrated MP4 video |
| `list_voices` | List available TTS voices for video narration (see `renderdeck://voice-samples` resource) |
| `get_help` | Show full help document with tools, prompts, and examples |

## MCP Resources

| Resource | Description |
|----------|-------------|
| `renderdeck://reference` | Complete reference document for writing slide prompt files |
| `renderdeck://voice-samples` | Interactive page listing all supported TTS voices with audio previews |

## MCP Prompts

| Prompt | Description |
|--------|-------------|
| `create_deck` | Guided workflow: read reference, write prompt, validate, generate |
| `check_and_fix` | Validate a prompt file and get error-by-error fix suggestions |

## Security & Privacy

RenderDeck runs entirely on your local machine as an MCP server process. Your slide prompt files and generated images stay on disk — the only external communication is with the Google Gemini API, which receives the text prompt for each slide and returns a generated image. No data is sent to any other service. Your Gemini API key is stored locally with Fernet encryption.
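Fernet (from the `cryptography` package) is authenticated symmetric encryption: the same locally stored key both encrypts and decrypts, and any tampering with the stored token is detected on decrypt. The sketch below shows the general technique only; the key handling and storage paths are illustrative, not RenderDeck's actual implementation:

```python
from cryptography.fernet import Fernet

# Generate a random key (URL-safe base64). In practice this key would be
# persisted locally with restrictive file permissions so the encrypted
# API key can be decrypted on later runs.
storage_key = Fernet.generate_key()
f = Fernet(storage_key)

# Encrypt the API key before it is written to disk.
token = f.encrypt(b"your-gemini-api-key")

# Decryption requires the same storage key; a modified token raises InvalidToken.
assert f.decrypt(token) == b"your-gemini-api-key"
```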

## Tech Stack

| Component | Technology |
|-----------|------------|
| Language | Python 3.12+ |
| Protocol | [Model Context Protocol (MCP)](https://modelcontextprotocol.io/) via FastMCP |
| Image generation | Google Gemini API (`gemini-2.5-flash-image`, `gemini-3-pro-image-preview`) |
| Presentation assembly | python-pptx |
| Image processing | Pillow |
| Auth | Google Gemini API key (Fernet-encrypted local storage) |

## Cost Guidance

- **Nano Banana** (`gemini-2.5-flash-image`): ~$0.039 per slide image
- **Nano Banana Pro** (`gemini-3-pro-image-preview`): ~$0.12 per slide image
- Cached images are reused automatically when the prompt hasn't changed
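As a rough worked example using the approximate prices above (check current Gemini pricing before relying on these numbers), a 10-slide deck costs about:

```python
# Approximate USD per generated slide image, per the rates listed above.
FLASH_PER_SLIDE = 0.039  # gemini-2.5-flash-image
PRO_PER_SLIDE = 0.12     # gemini-3-pro-image-preview

slides = 10
print(f"flash: ${slides * FLASH_PER_SLIDE:.2f}")  # flash: $0.39
print(f"pro:   ${slides * PRO_PER_SLIDE:.2f}")    # pro:   $1.20
```

Cached slides regenerate at no cost, so iterating on a few slides with `regenerate_slide` is much cheaper than rebuilding the whole deck.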

## Changelog

See [CHANGELOG.md](https://github.com/exjch/renderdeck/blob/main/CHANGELOG.md) for release notes.
