Metadata-Version: 2.3
Name: sttpy
Version: 0.1.0
Summary: Dead simple speech-to-text
Requires-Python: <3.12,>=3.9
Requires-Dist: click>=8.1.7
Requires-Dist: keyboard>=0.13.5
Requires-Dist: langchain-community>=0.3.12
Requires-Dist: langchain>=0.3.12
Requires-Dist: openai-whisper>=20240930
Requires-Dist: pygetwindow>=0.0.9
Requires-Dist: pyperclip>=1.9.0
Requires-Dist: pytest>=8.3.4
Requires-Dist: sounddevice>=0.5.1
Requires-Dist: torch>=2.5.1
Description-Content-Type: text/markdown

# Quickstart

This project is a simple voice dictation (speech-to-text) tool that runs completely on device. It uses openai-whisper models for speech recognition and can optionally post-process the transcribed text with a local LLM (currently any Ollama model).

Simply install, run, and hold the hotkey to speak. The transcribed text is pasted into the active window. Say 'help' to view voice commands.

## Installation

```sh
pip install sttpy --extra-index-url https://download.pytorch.org/whl/cu124
```
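If you don't have a CUDA-capable GPU, PyTorch also publishes CPU-only wheels; the index URL below is PyTorch's official CPU wheel index (check the PyTorch installation docs for your platform before relying on it):

```sh
# CPU-only install: pulls torch from the CPU wheel index instead of cu124
pip install sttpy --extra-index-url https://download.pytorch.org/whl/cpu
```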

## Usage

```sh
Usage: stt [OPTIONS]

  Voice dictation (speech-to-text) completely on device.

Options:
  --stt TEXT         Whisper model name (tiny.en, base.en, turbo, ...)
  --hotkey TEXT      Hotkey to hold while speaking
  --debug            Enable debug mode
  --post-processing  Enable LLM post-processing of transcribed text
  --type-mode        Use keystrokes instead of pasting text
  --help             Show this message and exit.
```

## Examples

Run with debug mode enabled, using the openai-whisper turbo model, and the hold-to-speak hotkey `f12`:
```sh
stt --debug --stt turbo --hotkey f12
```

Prompt for a hotkey to hold while speaking:

```sh
stt --hotkey prompt
Enter the hotkey you want to use followed by 'escape':
Hotkey: space. Press escape to confirm.
Hotkey: ctrl+space. Press escape to confirm.
Hotkey confirmed: ctrl+space
Hotkey: ctrl+space
2025-01-16 22:29:00 - INFO - Loading whisper model 'tiny.en' on cuda...
2025-01-16 22:29:00 - INFO - Press and hold 'ctrl+space' to speak
```

## Commands

A few commands are built into the voice dictation interface. Hold the hotkey and say 'help' to list them.

## Post-processing

Note: Post-processing is disabled by default because it adds latency and is still under development. To enable it, use the `--post-processing` flag. You will need a local Ollama server running with the model `llama3.2:3b-instruct-q5_K_M` available.
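
Assuming a standard Ollama install (the commands below use the stock Ollama CLI; skip `ollama serve` if Ollama already runs as a background service on your system), setup might look like:

```sh
ollama pull llama3.2:3b-instruct-q5_K_M   # fetch the post-processing model
ollama serve &                            # start the local server if it is not already running
stt --post-processing                     # run dictation with LLM post-processing enabled
```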