Metadata-Version: 2.4
Name: meeting-helper
Version: 1.7.9
Summary: AI meeting assistant for macOS — auto-record, live transcription, Claude-powered answers
Author: Victor
License: MIT
Keywords: meeting,transcription,deepgram,claude,overlay,macos
Requires-Python: >=3.11
Description-Content-Type: text/markdown
Requires-Dist: click>=8.0
Requires-Dist: anthropic>=0.40.0
Requires-Dist: sounddevice>=0.4.6
Requires-Dist: numpy>=1.24
Requires-Dist: deepgram-sdk<4.0,>=3.0
Requires-Dist: faster-whisper>=1.0.0
Requires-Dist: imageio-ffmpeg>=0.4.9
Requires-Dist: aiohttp>=3.9.0
Requires-Dist: pynput>=1.7.6
Requires-Dist: PySide6>=6.5.0
Requires-Dist: flet>=0.25.0
Requires-Dist: markdown>=3.5
Requires-Dist: pygments>=2.17
Requires-Dist: certifi>=2024.0
Requires-Dist: pyobjc-framework-Cocoa>=10.0; sys_platform == "darwin"
Requires-Dist: pyobjc-framework-AVFoundation>=10.0; sys_platform == "darwin"
Requires-Dist: pyobjc-framework-Quartz>=10.0; sys_platform == "darwin"
Requires-Dist: pyobjc-framework-ScreenCaptureKit>=10.0; sys_platform == "darwin"
Requires-Dist: pyobjc-framework-CoreMedia>=10.0; sys_platform == "darwin"
Requires-Dist: pyobjc-framework-libdispatch>=10.0; sys_platform == "darwin"
Provides-Extra: dev
Requires-Dist: pytest>=7.0; extra == "dev"
Requires-Dist: pytest-cov; extra == "dev"
Requires-Dist: ruff; extra == "dev"
Requires-Dist: mypy; extra == "dev"

# Meeting Helper

AI meeting assistant for macOS. Automatically detects when a meeting starts, records and transcribes in real-time, and lets you ask Claude questions about what's being discussed — using your codebase and knowledge base as context.

## Features

- **Auto-detection** — starts recording automatically when any app activates your microphone (Zoom, Meet, Teams, etc.)
- **Real-time transcription** — local Whisper (free, no API key needed) or Deepgram Nova-3 streaming
- **Invisible overlay** — native macOS window hidden from screen share via `NSWindow.setSharingType_(0)`
- **Ask Claude** — hotkey or type a question during the meeting; Claude answers using the transcript + your repos + knowledge docs
- **Per-meeting context** — select which repos and knowledge folders to load per meeting, so Claude only gets relevant context
- **Post-meeting** — full transcription + structured summary saved alongside the MP3 recording

## Requirements

- macOS 13+ (Ventura or later)
- Python 3.11+
- Microphone + Screen Recording + Accessibility permissions (prompted on first run)

---

## Installation

### Option A — pipx (recommended)

```bash
# Install pipx if you don't have it
brew install pipx && pipx ensurepath

# Install meeting-helper
pipx install meeting-helper
```

### Option B — from source

```bash
git clone <repo-url> meeting-helper
cd meeting-helper
python3 -m venv .venv
source .venv/bin/activate
pip install -e .
```

---

## First Run Setup

### 1. Download the Whisper model

Whisper is used for local transcription (no API key needed). The model needs to be downloaded once before your first meeting:

```bash
python3 -c "from faster_whisper import WhisperModel; WhisperModel('tiny', device='cpu', compute_type='int8')"
```

This downloads ~75MB to `~/.cache/huggingface/`. It only happens once.

> **Optional — better accuracy:** Use the `small` model (~244MB) instead of `tiny`:
> ```bash
> python3 -c "from faster_whisper import WhisperModel; WhisperModel('small', device='cpu', compute_type='int8')"
> ```
> Then in `meeting-helper setup`, set the Whisper model to `small`.

### 2. Run the setup wizard

```bash
meeting-helper setup
```

You'll be prompted for:

1. **Anthropic API key** — get one at [console.anthropic.com/settings/keys](https://console.anthropic.com/settings/keys)
2. **Deepgram API key** *(optional)* — get one at [console.deepgram.com](https://console.deepgram.com) for lower-latency streaming transcription. Leave blank to use local Whisper.
3. **Recordings directory** — where to save MP3s and transcripts (default: `~/meetings`)
4. **Repos** *(optional)* — paths to your code repos for Claude context
5. **Knowledge folder** *(optional)* — a folder of `.md`/`.txt` files organized by topic

### 3. Grant macOS permissions

On first run, macOS will prompt for:
- **Microphone** — to capture your voice
- **Screen Recording** — to capture system audio (meeting participants' voices)
- **Accessibility** — for global hotkeys (2×Cmd, 2×ESC)

macOS may terminate the daemon when a permission is first granted; if that happens, run `meeting-helper start` again.

---

## Usage

```bash
meeting-helper start    # start the background daemon
meeting-helper gui      # open the main window
meeting-helper stop     # stop the daemon
meeting-helper status   # show daemon and meeting status
meeting-helper logs     # tail the daemon log
meeting-helper version  # check for updates
```

Once the daemon is running:

1. **Start a Zoom/Meet/Teams call** → recording starts automatically
2. **Speak** → transcript appears in the overlay and GUI dashboard
3. **Ask Claude** → press `2×Cmd` (hotkey) or type in the "Ask Claude" box in the GUI
4. **End the call** → recording stops, MP3 + live transcript saved

### Hotkeys

| Combo | Action |
|---|---|
| `2× Cmd` | Trigger Claude query (uses last 5 transcript sentences) |
| `2× ESC` | Toggle overlay show/hide |
| `ESC + →` / `ESC + ←` | Scroll response down/up |
| `ESC + ]` / `ESC + [` | Navigate previous/next response |
| `Cmd + ↑` / `Cmd + ↓` | Font size up/down in overlay |
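
The `2×Cmd` hotkey sends a recent window of the transcript rather than the whole thing. A minimal sketch of that windowing step (the function name and the naive sentence splitter are illustrative, not the tool's actual implementation):

```python
import re

def last_sentences(transcript: str, n: int = 5) -> str:
    """Return the last n sentences of a rolling transcript.

    Splits naively on ., ! and ? followed by whitespace — a real
    pipeline may segment on transcription chunk boundaries instead.
    """
    sentences = [s.strip()
                 for s in re.split(r"(?<=[.!?])\s+", transcript.strip())
                 if s.strip()]
    return " ".join(sentences[-n:])

snippet = last_sentences("One. Two. Three. Four. Five. Six. Seven.", n=5)
print(snippet)  # → "Three. Four. Five. Six. Seven."
```

Keeping the query window small is what makes the hotkey fast: Claude sees only the most recent exchange, plus whatever per-meeting context you loaded.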

---

## Per-Meeting Context

By default, no repos or knowledge docs are loaded into Claude's context — loading everything would dilute accuracy. Instead, use the **Context Selector** in the Dashboard during a meeting:

1. Chips show your configured repos — click to select
2. Tree shows your knowledge base folders — check what's relevant
3. Click **Load Context** — Claude now has access to only what you selected
4. Ask questions — Claude answers using that specific context

---

## Post-Meeting

Open the **Recordings** screen in the GUI to:
- Play back the MP3
- Run full transcription (faster-whisper, timestamped)
- Generate a structured summary with Claude (key points, decisions, action items)
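
The summary step boils down to sending the transcript to the Messages API with a structured instruction. A sketch of the payload assembly (the prompt wording is illustrative; the actual prompts ship with the tool):

```python
def summary_prompt(transcript: str) -> list[dict]:
    """Build a messages payload asking Claude for a structured summary."""
    instruction = (
        "Summarize this meeting transcript as markdown with three "
        "sections: Key Points, Decisions, Action Items (with owners "
        "where stated)."
    )
    return [{
        "role": "user",
        "content": f"{instruction}\n\n<transcript>\n{transcript}\n</transcript>",
    }]

# Passed to the Anthropic SDK roughly as (model name illustrative):
#   client.messages.create(model="claude-sonnet-4-...", max_tokens=2000,
#                          messages=summary_prompt(text))
```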

---

## Configuration

Config is saved to `~/.config/meeting-helper/config.json`. Run `meeting-helper setup` to reconfigure.
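
The file is plain JSON. The exact keys depend on your version — the shape below is a hypothetical example (field names are illustrative, not guaranteed):

```json
{
  "anthropic_api_key": "sk-ant-...",
  "deepgram_api_key": "",
  "recordings_dir": "~/meetings",
  "whisper_model": "tiny",
  "repos": ["~/code/backend", "~/code/frontend"],
  "knowledge_dir": "~/meetings/knowledge"
}
```

Prefer `meeting-helper setup` over hand-editing, so keys stay consistent with what the daemon expects.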

### Whisper model quality vs speed

| Model | Size | Speed | Accuracy |
|---|---|---|---|
| `tiny` | 75MB | Fastest | Good for real-time |
| `base` | 142MB | Medium | Balanced |
| `small` | 244MB | Slowest of the three | Best accuracy |

Default is `tiny` for real-time transcription. Post-meeting transcription always uses `small` for better quality.

---

## Check for Updates

```bash
meeting-helper version
```

Checks PyPI and upgrades automatically via pipx if a newer version is available.
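
Under the hood, such a check compares the installed version against the latest release on PyPI. A sketch of the comparison logic (the helper is illustrative; real tools use `packaging.version`, which also handles pre-releases):

```python
def is_newer(latest: str, installed: str) -> bool:
    """True if latest > installed, comparing plain dotted X.Y.Z versions
    numerically, so "1.10.0" correctly beats "1.9.9"."""
    def key(v: str) -> tuple[int, ...]:
        return tuple(int(part) for part in v.split("."))
    return key(latest) > key(installed)

# The latest version comes from PyPI's public JSON API, e.g.:
#   https://pypi.org/pypi/meeting-helper/json  → data["info"]["version"]
print(is_newer("1.8.0", "1.7.9"))  # → True
```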

---

## Troubleshooting

**Daemon not detecting meetings**
- Make sure Screen Recording permission is granted in System Settings → Privacy & Security
- Try `meeting-helper logs` to see what's happening

**No transcription appearing**
- If using Deepgram: check your API key with `meeting-helper status`
- If using local Whisper: ensure the model was downloaded (see First Run Setup above)
- Check `meeting-helper logs` for errors

**Overlay not visible**
- Press `2×ESC` to show/hide
- The overlay is intentionally invisible to screen share — check your physical display, not a recording

**Permission denied errors**
- Go to System Settings → Privacy & Security → Microphone / Screen Recording / Accessibility
- Remove and re-add the terminal app or meeting-helper entry
