Metadata-Version: 2.4
Name: audio-transcript-mcp
Version: 0.1.1
Summary: Real-time audio transcription MCP server for Claude Code
Project-URL: Homepage, https://github.com/llilakoblock/audio-transcript-mcp
Project-URL: Repository, https://github.com/llilakoblock/audio-transcript-mcp
Project-URL: Issues, https://github.com/llilakoblock/audio-transcript-mcp/issues
Author: llilakoblock
License-Expression: MIT
License-File: LICENSE
Keywords: audio,deepgram,mcp,transcription,whisper
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Multimedia :: Sound/Audio :: Speech
Requires-Python: >=3.10
Requires-Dist: faster-whisper>=1.0
Requires-Dist: mcp[cli]>=1.2
Requires-Dist: numpy>=1.24
Requires-Dist: pyaudiowpatch>=0.2.12
Requires-Dist: soxr>=0.3
Requires-Dist: websockets>=12.0
Description-Content-Type: text/markdown

# audio-transcript-mcp

Real-time audio transcription MCP server for Claude Code.

Captures **microphone + system audio** (WASAPI loopback on Windows) and transcribes via **Deepgram** (cloud) or **faster-whisper** (local, GPU/CPU).

## Features

- **Dual audio capture**: mic + system sound simultaneously
- **Two STT backends** switchable at runtime (Deepgram nova-3 / faster-whisper)
- **Stereo opus recording**: each session saves a stereo opus file (L=mic, R=system audio)
- **Per-session directories**: transcript + audio saved to `~/.audio-transcript-mcp/transcripts/<timestamp>/`
- Chunk overlap with text deduplication (no cut words at boundaries)
- Native float32 audio pipeline for whisper (no lossy int16 round-trip)
- High-quality stateful resampling via soxr (no boundary artifacts)
- Whisper hallucination filter (no_speech_prob + avg_logprob thresholds)
- Transcript buffer with time-based queries
- Auto-reconnect for Deepgram WebSocket
- GPU model unload/reload on stop/start (CUDA memory management)

## Architecture

```
┌──────────────┐     ┌──────────┐     ┌─────────────────┐
│  Mic (int16) ├────►│          │     │  STT Backend    │
│  WASAPI      │     │  Worker  ├────►│  whisper / DG   ├──► Transcript buffer
└──────────────┘     │  Thread  │     └─────────────────┘
                     │          ├────►┌─────────────────┐
┌──────────────┐     │          │     │ StereoOpusRec   │
│ System audio ├────►│          │     │ L=me R=others   ├──► audio.opus
│ Loopback f32 │     └──────────┘     └─────────────────┘
└──────────────┘

Audio pipeline: native capture → stereo→mono → soxr resample → backend/opus
```

Each audio source runs in its own worker thread. Audio is captured in the device's native format (float32 for loopback, int16 for mic), converted to mono, and routed to both the STT backend and the stereo opus recorder.
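
As a point of reference, the per-source path described above can be sketched with numpy and soxr roughly as follows. The sample rates, buffer handling, and the stateful `ResampleStream` usage are illustrative assumptions, not the package's actual code.

```python
# Illustrative sketch: native capture buffer -> float32 mono -> stateful
# soxr resample to the STT backend's rate. Rates and names are assumptions.
import numpy as np
import soxr

DEVICE_RATE = 48000    # whatever the capture device reports
BACKEND_RATE = 16000   # whisper / Deepgram typically take 16 kHz mono

# One stateful resampler per audio source avoids artifacts at chunk boundaries.
resampler = soxr.ResampleStream(DEVICE_RATE, BACKEND_RATE, num_channels=1,
                                dtype="float32")

def prepare_chunk(raw: bytes, fmt: str, channels: int) -> np.ndarray:
    """Convert one capture buffer to mono float32 at the backend rate."""
    if fmt == "int16":   # microphone path: int16 -> float32 in [-1, 1]
        samples = np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 32768.0
    else:                # WASAPI loopback already delivers float32
        samples = np.frombuffer(raw, dtype=np.float32)
    if channels > 1:     # stereo -> mono by averaging channels
        samples = samples.reshape(-1, channels).mean(axis=1)
    return resampler.resample_chunk(samples)
```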

## Requirements

- Python 3.10+
- Windows (WASAPI loopback for system audio capture); mic-only on macOS/Linux
- NVIDIA GPU recommended for local whisper backend

## Installation

### From PyPI (recommended)

```bash
pip install audio-transcript-mcp
```

Or run without installing via `uvx`:

```bash
uvx audio-transcript-mcp
```

### From source

```bash
git clone https://github.com/llilakoblock/audio-transcript-mcp.git
cd audio-transcript-mcp
pip install -e .
```

## MCP Configuration

Add to your `mcp.json` (Claude Code settings):

### Using PyPI install

```json
{
  "audio-transcript": {
    "type": "stdio",
    "command": "audio-transcript-mcp",
    "env": {
      "STT_BACKEND": "local",
      "DEEPGRAM_API_KEY": "your-deepgram-api-key",
      "DEEPGRAM_LANGUAGE": "ru",
      "DEEPGRAM_MODEL": "nova-3",
      "DEEPGRAM_UTTERANCE_END_MS": "2500",
      "DEEPGRAM_ENDPOINTING": "500",
      "WHISPER_MODEL": "large-v3",
      "WHISPER_DEVICE": "cuda",
      "WHISPER_LANGUAGE": "ru",
      "WHISPER_CHUNK_SEC": "10",
      "WHISPER_OVERLAP_SEC": "2",
      "TRANSCRIPT_MAX_AGE": "3600"
    }
  }
}
```

### Using uvx (no install needed)

```json
{
  "audio-transcript": {
    "type": "stdio",
    "command": "uvx",
    "args": ["audio-transcript-mcp"],
    "env": {
      "STT_BACKEND": "deepgram",
      "DEEPGRAM_API_KEY": "your-deepgram-api-key"
    }
  }
}
```

> **Note:** System audio capture (loopback) uses WASAPI and is Windows-only. On macOS/Linux, only microphone input works out of the box.
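
For orientation, opening a WASAPI loopback stream on Windows with pyaudiowpatch usually looks something like the sketch below. The helper names follow pyaudiowpatch's own examples, and the parameters are illustrative rather than this package's implementation.

```python
# Hedged sketch of WASAPI loopback capture with pyaudiowpatch (Windows only).
# The loopback helper and parameters follow the library's examples; this is
# not the package's actual capture code.
import pyaudiowpatch as pyaudio

p = pyaudio.PyAudio()
loopback = p.get_default_wasapi_loopback()  # default output device, exposed as an input

stream = p.open(
    format=pyaudio.paFloat32,                # loopback delivers float32
    channels=loopback["maxInputChannels"],
    rate=int(loopback["defaultSampleRate"]),
    input=True,
    input_device_index=loopback["index"],
    frames_per_buffer=1024,
)
data = stream.read(1024)   # raw frames of whatever the system is playing
```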

## Environment Variables

All configuration is done via environment variables in the `env` block of your MCP config.

### General

| Variable | Default | Description |
|---|---|---|
| `STT_BACKEND` | `deepgram` | Which speech-to-text engine to use: `"deepgram"` for cloud (fast, needs an API key) or `"local"` for offline faster-whisper (GPU recommended). Switchable at runtime via the `set_backend` tool. |
| `TRANSCRIPT_MAX_AGE` | `3600` | How long (seconds) to keep transcript entries in the in-memory buffer. Older entries are pruned automatically. |
| `TRANSCRIPT_DIR` | `~/.audio-transcript-mcp/transcripts` | Root directory for session output. Each session creates a timestamped subdirectory with `transcript.txt` and `audio.opus`. |
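
As a rough illustration of what `TRANSCRIPT_MAX_AGE` controls, a time-based transcript buffer with automatic pruning can be as small as the sketch below (names and structure are assumptions, not the package's internals).

```python
# Minimal sketch of a time-based transcript buffer with max-age pruning.
import time
from collections import deque

MAX_AGE = 3600            # seconds, i.e. TRANSCRIPT_MAX_AGE

buffer = deque()          # (unix_timestamp, speaker, text), oldest first

def add_entry(speaker: str, text: str) -> None:
    now = time.time()
    buffer.append((now, speaker, text))
    while buffer and now - buffer[0][0] > MAX_AGE:   # prune old entries
        buffer.popleft()

def get_transcript(last_seconds: float = 60) -> str:
    cutoff = time.time() - last_seconds
    return "\n".join(f"{spk}: {txt}" for ts, spk, txt in buffer if ts >= cutoff)
```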

### Deepgram (cloud STT)

Used when `STT_BACKEND=deepgram`. Streams audio over a WebSocket and returns results in real time.

| Variable | Default | Description |
|---|---|---|
| `DEEPGRAM_API_KEY` | — | **Required.** Get one at [console.deepgram.com](https://console.deepgram.com). |
| `DEEPGRAM_LANGUAGE` | `ru` | Language code. Use `"multi"` for automatic multi-language detection (requires nova-3). |
| `DEEPGRAM_MODEL` | `nova-3` | Deepgram model. `nova-3` is the latest and supports the `"multi"` language setting. `nova-2` is older but cheaper. |
| `DEEPGRAM_UTTERANCE_END_MS` | `2500` | How long to wait (ms) after speech ends before finalizing the utterance. Higher = fewer splits in long pauses. Requires `interim_results=true` (set automatically). |
| `DEEPGRAM_ENDPOINTING` | `500` | Endpointing sensitivity in ms. Lower = faster response but may split mid-sentence. Higher = waits longer before deciding speech ended. |
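
To make these options concrete, a streaming session built on the `websockets` dependency typically looks like the sketch below. The URL and query parameters follow Deepgram's streaming API; the surrounding loop is illustrative, not this server's actual backend.

```python
# Hedged sketch of a Deepgram live-transcription session over WebSocket.
import asyncio
import json
import os
import websockets

async def stream(audio_chunks):
    params = (
        "model=nova-3&language=ru&encoding=linear16&sample_rate=16000"
        "&channels=1&interim_results=true&utterance_end_ms=2500&endpointing=500"
    )
    url = f"wss://api.deepgram.com/v1/listen?{params}"
    headers = {"Authorization": f"Token {os.environ['DEEPGRAM_API_KEY']}"}
    # websockets >= 14 renamed extra_headers to additional_headers
    async with websockets.connect(url, extra_headers=headers) as ws:
        async def feed():
            for chunk in audio_chunks:                 # raw 16-bit PCM bytes
                await ws.send(chunk)
            await ws.send(json.dumps({"type": "CloseStream"}))

        sender = asyncio.create_task(feed())
        async for message in ws:
            msg = json.loads(message)
            if msg.get("type") != "Results":
                continue
            alt = msg["channel"]["alternatives"][0]
            if msg.get("is_final") and alt["transcript"]:
                print(alt["transcript"])               # final utterance text
        await sender
```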

### Whisper (local STT)

Used when `STT_BACKEND=local`. Runs faster-whisper on your GPU/CPU, fully offline.

| Variable | Default | Description |
|---|---|---|
| `WHISPER_MODEL` | `large-v3` | Model size. Options: `tiny`, `base`, `small`, `medium`, `large-v3`. Larger = more accurate but slower and more VRAM. `large-v3` needs ~4GB VRAM. |
| `WHISPER_DEVICE` | `cuda` | `"cuda"` for NVIDIA GPU (recommended) or `"cpu"` (much slower). |
| `WHISPER_LANGUAGE` | — | Language hint (e.g. `"ru"`, `"en"`). Empty = auto-detect. Setting a language improves accuracy and speed. |
| `WHISPER_CHUNK_SEC` | `5` | Audio chunk duration in seconds sent to whisper for transcription. Longer chunks = more context but higher latency. |
| `WHISPER_OVERLAP_SEC` | `2` | Overlap between consecutive chunks. Prevents words from being cut at chunk boundaries. Text deduplication removes repeated words automatically. |
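
The sketch below shows how these settings interact: fixed-size chunks with overlap, word-level de-duplication at chunk boundaries, and the hallucination filter mentioned in Features. Thresholds and helper names are illustrative assumptions, not the package's internals.

```python
# Sketch: chunked faster-whisper transcription with overlap, boundary
# de-duplication and a no_speech_prob / avg_logprob hallucination filter.
import numpy as np
from faster_whisper import WhisperModel

RATE = 16000                              # mono float32 at 16 kHz
CHUNK, OVERLAP = 10 * RATE, 2 * RATE      # WHISPER_CHUNK_SEC / WHISPER_OVERLAP_SEC

model = WhisperModel("large-v3", device="cuda", compute_type="float16")

def dedupe(prev_words, new_words, max_overlap=10):
    """Drop the longest run of words repeated from the previous chunk."""
    for n in range(min(max_overlap, len(prev_words), len(new_words)), 0, -1):
        if prev_words[-n:] == new_words[:n]:
            return new_words[n:]
    return new_words

def transcribe_stream(samples: np.ndarray):
    prev_words, start = [], 0
    while start < len(samples):
        segments, _ = model.transcribe(samples[start:start + CHUNK], language="ru")
        words = []
        for seg in segments:
            # hallucination filter: skip silent or low-confidence segments
            if seg.no_speech_prob > 0.6 or seg.avg_logprob < -1.0:
                continue
            words += seg.text.split()
        fresh = dedupe(prev_words, words)
        if fresh:
            yield " ".join(fresh)
        prev_words = words
        start += CHUNK - OVERLAP          # advance, keeping overlap for context
```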

### Full example

```json
{
  "audio-transcript": {
    "type": "stdio",
    "command": "audio-transcript-mcp",
    "env": {
      "STT_BACKEND": "local",
      "DEEPGRAM_API_KEY": "your-deepgram-api-key",
      "DEEPGRAM_LANGUAGE": "multi",
      "DEEPGRAM_MODEL": "nova-3",
      "DEEPGRAM_UTTERANCE_END_MS": "2500",
      "DEEPGRAM_ENDPOINTING": "500",
      "WHISPER_MODEL": "large-v3",
      "WHISPER_DEVICE": "cuda",
      "WHISPER_LANGUAGE": "ru",
      "WHISPER_CHUNK_SEC": "15",
      "WHISPER_OVERLAP_SEC": "3",
      "TRANSCRIPT_MAX_AGE": "3600",
      "TRANSCRIPT_DIR": "C:/Users/you/.audio-transcript-mcp/transcripts"
    }
  }
}
```

> You only need to set the variables for the backend you're using. Deepgram vars are ignored when `STT_BACKEND=local` and vice versa.

## Session Output

Each recording session creates a timestamped directory:

```
~/.audio-transcript-mcp/transcripts/
  2026-03-06_23-24-48/
    transcript.txt    # Plain text transcript
    audio.opus        # Stereo opus (L=mic, R=system)
    debug.log         # Whisper debug data (local backend only)
```

The transcript is plain text:
```
[23:24:50] me — Hello, can you hear me?

[23:24:52] others — Yes, I can hear you fine.

[23:24:55] system — [STARTED: Microphone, 44100Hz, 2ch]
```

## MCP Tools

| Tool | Description |
|---|---|
| `start_listening` | Start capturing mic + system audio and transcribing |
| `stop_listening` | Stop capture, save transcript and opus recording |
| `is_listening` | Check if capture is active |
| `get_transcript` | Get transcript for the last N seconds (default 60) |
| `get_full_transcript` | Get entire transcript buffer |
| `get_transcript_since` | Get transcript since a Unix timestamp |
| `clear_transcript` | Clear the transcript buffer |
| `get_backend` | Show current STT backend |
| `set_backend` | Switch backend (`"deepgram"` / `"local"`) at runtime |
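
The tools above are presumably registered through the MCP Python SDK's FastMCP helper (shipped with the `mcp[cli]` dependency). A minimal sketch of that pattern, with a stand-in engine object, could look like this:

```python
# Sketch of the tool-registration pattern; the engine object is a stand-in,
# not the package's actual AudioEngine.
from mcp.server.fastmcp import FastMCP

class _Engine:                                    # placeholder for AudioEngine
    def get_transcript(self, seconds: int) -> str: return ""
    def set_backend(self, name: str) -> None: ...

engine = _Engine()
mcp = FastMCP("audio-transcript")

@mcp.tool()
def get_transcript(seconds: int = 60) -> str:
    """Return the transcript captured during the last N seconds."""
    return engine.get_transcript(seconds)

@mcp.tool()
def set_backend(backend: str) -> str:
    """Switch between 'deepgram' and 'local' at runtime."""
    engine.set_backend(backend)
    return f"backend set to {backend}"

if __name__ == "__main__":
    mcp.run()   # stdio transport, matching the "type": "stdio" config above
```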

## Project Structure

```
src/audio_transcript_mcp/
  __init__.py            # Package version
  __main__.py            # python -m entry point
  server.py              # MCP tools (thin wrapper)
  engine.py              # AudioEngine, Buffer, config
  audio_utils.py         # Format conversion (float32↔int16, stereo→mono)
  backends/
    __init__.py          # Backend factory
    whisper.py           # Local faster-whisper STT
    deepgram.py          # Deepgram WebSocket STT
  recorder/
    __init__.py
    opus.py              # StereoOpusRecorder (PyOgg)
```

## Releasing

Releases are automated via GitHub Actions:

```bash
# Update version in src/audio_transcript_mcp/__init__.py
git tag v0.1.0
git push origin v0.1.0
# CI automatically builds, publishes to PyPI, and creates a GitHub Release
```

## License

MIT
