Metadata-Version: 2.4
Name: speech-to-speech
Version: 0.2.4
Summary: Low-latency speech-to-speech pipeline
Author: Hugging Face
License-Expression: Apache-2.0
Project-URL: Homepage, https://github.com/huggingface/speech-to-speech
Project-URL: Repository, https://github.com/huggingface/speech-to-speech
Project-URL: Issues, https://github.com/huggingface/speech-to-speech/issues
Keywords: speech-to-speech,voice-agents,speech-recognition,text-to-speech,openai
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Multimedia :: Sound/Audio :: Speech
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: fastapi>=0.115.0
Requires-Dist: httpx>=0.28.0
Requires-Dist: nltk==3.9.1
Requires-Dist: numpy<2.4.4,>=1.26.0; platform_system == "Darwin"
Requires-Dist: numpy>=1.26.0; platform_system != "Darwin"
Requires-Dist: openai==2.28.0
Requires-Dist: pillow>=10.0.0
Requires-Dist: pydantic>=2.0
Requires-Dist: rich>=13.0
Requires-Dist: scipy>=1.10.0
Requires-Dist: sounddevice==0.5.3; platform_system == "Darwin"
Requires-Dist: sounddevice>=0.5.0; platform_system != "Darwin"
Requires-Dist: soundfile>=0.13.0; platform_system == "Darwin"
Requires-Dist: torch==2.11.0; platform_system == "Darwin"
Requires-Dist: torch>=2.4.0; platform_system != "Darwin"
Requires-Dist: torchaudio==2.11.0; platform_system == "Darwin"
Requires-Dist: torchaudio>=2.4.0; platform_system != "Darwin"
Requires-Dist: transformers==5.6.2; platform_system == "Darwin"
Requires-Dist: transformers>=4.57.0; platform_system != "Darwin"
Requires-Dist: uvicorn>=0.30.0
Requires-Dist: websockets>=12.0
Requires-Dist: nano-parakeet>=0.2.0; platform_system != "Darwin"
Requires-Dist: faster-qwen3-tts>=0.2.6; platform_system != "Darwin"
Requires-Dist: miniaudio==1.61; platform_system == "Darwin"
Requires-Dist: mlx==0.31.1; platform_system == "Darwin"
Requires-Dist: mlx-audio==0.4.2; platform_system == "Darwin"
Requires-Dist: mlx-lm==0.31.1; platform_system == "Darwin"
Requires-Dist: mlx-metal==0.31.1; platform_system == "Darwin"
Requires-Dist: misaki>=0.9.4; platform_system == "Darwin"
Requires-Dist: espeakng-loader>=0.2.4; platform_system == "Darwin"
Requires-Dist: spacy>=3.8.4; platform_system == "Darwin"
Requires-Dist: phonemizer-fork>=3.3.2; platform_system == "Darwin"
Provides-Extra: chattts
Requires-Dist: ChatTTS>=0.1.1; extra == "chattts"
Provides-Extra: facebook-mms
Requires-Dist: transformers>=4.57.0; extra == "facebook-mms"
Provides-Extra: faster-whisper
Requires-Dist: faster-whisper>=1.0.3; extra == "faster-whisper"
Provides-Extra: kokoro
Requires-Dist: kokoro>=0.9.2; platform_system != "Darwin" and extra == "kokoro"
Provides-Extra: language-detection
Requires-Dist: lingua-language-detector>=2.0.2; extra == "language-detection"
Provides-Extra: mlx-lm
Requires-Dist: mlx-lm==0.31.1; platform_system == "Darwin" and extra == "mlx-lm"
Requires-Dist: mlx-vlm==0.4.1; platform_system == "Darwin" and extra == "mlx-lm"
Provides-Extra: paraformer
Requires-Dist: funasr>=1.1.6; extra == "paraformer"
Requires-Dist: modelscope>=1.17.1; extra == "paraformer"
Requires-Dist: onnxruntime<1.24; python_version < "3.11" and extra == "paraformer"
Provides-Extra: pocket
Requires-Dist: pocket-tts>=0.1.0; extra == "pocket"
Provides-Extra: websocket
Requires-Dist: websockets>=12.0; extra == "websocket"
Provides-Extra: whisper-mlx
Requires-Dist: lightning-whisper-mlx>=0.0.10; platform_system == "Darwin" and extra == "whisper-mlx"
Dynamic: license-file

<div align="center">
  <div>&nbsp;</div>
  <img src="https://raw.githubusercontent.com/huggingface/speech-to-speech/main/logo.png" width="600"/>
</div>

# Speech To Speech: Build local voice agents with open-source models

## 📖 Quick Index
* [Approach](#approach)
  - [Structure](#structure)
  - [Modularity](#modularity)
* [Setup](#setup)
* [Usage](#usage)
  - [Realtime approach](#realtime-approach)
  - [Server/Client approach](#serverclient-approach)
  - [WebSocket approach](#websocket-approach)
  - [Local approach](#local-approach-mac)
  - [Docker Server approach](#docker-server)
* [Command-line usage](#command-line-usage)
  - [Module-level parameters](#module-level-parameters)
  - [VAD parameters](#vad-parameters)
  - [STT, LM and TTS parameters](#stt-lm-and-tts-parameters)
  - [Generation parameters](#generation-parameters)

## Approach

### Structure
This repository implements a speech-to-speech cascaded pipeline consisting of the following parts:
1. **Voice Activity Detection (VAD)**
2. **Speech to Text (STT)**
3. **Language Model (LM)**
4. **Text to Speech (TTS)**
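
As an illustrative sketch (not the project's actual code; the real handlers are far richer), the cascade can be modeled as four stages connected by queues, each stage consuming the previous stage's output in its own thread:

```python
# Illustrative cascade: four pipeline stages wired together with queues.
# The stage functions are stubs standing in for the real VAD/STT/LM/TTS handlers.
import queue
import threading

def stage(fn, inbox, outbox):
    """Run one pipeline stage: read from inbox, transform, write to outbox."""
    while True:
        item = inbox.get()
        if item is None:        # sentinel: shut down and propagate downstream
            outbox.put(None)
            return
        outbox.put(fn(item))

# Stub handlers standing in for the real models.
vad = lambda audio: audio                 # pass detected speech chunks through
stt = lambda audio: f"text({audio})"      # audio -> transcript
lm  = lambda text: f"reply({text})"       # transcript -> response text
tts = lambda text: f"audio({text})"       # response text -> synthesized audio

queues = [queue.Queue() for _ in range(5)]
threads = [
    threading.Thread(target=stage, args=(fn, queues[i], queues[i + 1]), daemon=True)
    for i, fn in enumerate([vad, stt, lm, tts])
]
for t in threads:
    t.start()

queues[0].put("chunk0")   # one chunk of "microphone" input
queues[0].put(None)       # end of stream
result = queues[-1].get()
print(result)             # audio(reply(text(chunk0)))
```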

### Modularity
The pipeline provides a fully open and modular approach, with a focus on leveraging models available through the Transformers library on the Hugging Face hub. The code is designed for easy modification, and we already support device-specific and external library implementations:

**VAD** 
- [Silero VAD v5](https://github.com/snakers4/silero-vad)

**STT**
- Any [Whisper](https://huggingface.co/docs/transformers/en/model_doc/whisper) model checkpoint on the Hugging Face Hub through Transformers 🤗, including [whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) and [distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3)
- [Lightning Whisper MLX](https://github.com/mustafaaljadery/lightning-whisper-mlx?tab=readme-ov-file#lightning-whisper-mlx)
- [MLX Audio Whisper](https://github.com/huggingface/mlx-audio) - Fast Whisper inference on Apple Silicon
- [Parakeet TDT](https://huggingface.co/nvidia/parakeet-tdt-1.1b) - Real-time streaming STT with sub-100ms latency on Apple Silicon (CUDA/CPU via nano-parakeet, no NeMo)
- [Paraformer - FunASR](https://github.com/modelscope/FunASR)

**LLM**
- Any instruction-following model on the [Hugging Face Hub](https://huggingface.co/models?pipeline_tag=text-generation&sort=trending) via Transformers 🤗
- [mlx-lm](https://github.com/ml-explore/mlx-examples/blob/main/llms/README.md)
- [OpenAI API](https://platform.openai.com/docs/quickstart)

**TTS**
- [ChatTTS](https://github.com/2noise/ChatTTS?tab=readme-ov-file)
- [Pocket TTS](https://github.com/kyutai-labs/pocket-tts) - Streaming TTS with voice cloning from Kyutai Labs
- [Kokoro-82M](https://huggingface.co/hexgrad/Kokoro-82M) - Fast and high-quality TTS optimized for Apple Silicon
- [Qwen3-TTS](https://huggingface.co/Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice)

## Setup

Install the default package from PyPI:
```bash
pip install speech-to-speech
```

The default install is scoped to the standard realtime voice-agent path:
- Parakeet TDT for STT
- OpenAI-compatible API for the language model
- Qwen3-TTS for speech output
- local audio and realtime server modes

Optional backends are installed with extras:
```bash
pip install "speech-to-speech[kokoro]"
pip install "speech-to-speech[pocket]"
pip install "speech-to-speech[faster-whisper]"
pip install "speech-to-speech[paraformer]"
pip install "speech-to-speech[mlx-lm]"
pip install "speech-to-speech[websocket]"
```

Deprecated model implementations, including MeloTTS, live in [`archive/`](./archive) and are no longer wired into the CLI.

For development from source:
```bash
git clone https://github.com/huggingface/speech-to-speech.git
cd speech-to-speech
uv sync
```

This installs the `speech_to_speech` package in editable mode and makes the `speech-to-speech` CLI command available. The project uses a single `pyproject.toml` with platform markers, so macOS and non-macOS dependencies are resolved automatically from one file.

**Note on DeepFilterNet:** DeepFilterNet (used for optional audio enhancement in VAD) requires `numpy<2` and conflicts with Pocket TTS, which requires `numpy>=2`. Install DeepFilterNet manually only in environments where you are not using Pocket TTS.


## Usage

The default CLI is equivalent to a realtime Parakeet + OpenAI-compatible LLM + Qwen3-TTS setup. It uses `OPENAI_API_KEY` from the environment unless `--open_api_api_key` is provided:
```bash
speech-to-speech
```

The pipeline can be run in four ways:
- **Realtime approach**: Models run locally or on a server, and an OpenAI Realtime-compatible WebSocket API is exposed for another app.
- **Server/Client approach**: Models run on a server, and audio input/output are streamed from a client using TCP sockets.
- **WebSocket approach**: Models run on a server, and audio input/output are streamed from a client using WebSockets.
- **Local approach**: The whole pipeline runs on a single machine, using the local microphone and speakers.

### Realtime Approach

The default realtime setup uses `--llm open_api`, so it needs an OpenAI API key. Export `OPENAI_API_KEY` before launching, or pass `--open_api_api_key` explicitly. For a deployed OpenAI-compatible LLM, also set `--open_api_base_url`.

```bash
export OPENAI_API_KEY=...
```

The default mode starts the OpenAI Realtime-compatible server:
```bash
speech-to-speech
```

This is equivalent to:
```bash
speech-to-speech \
    --thresh 0.6 \
    --stt parakeet-tdt \
    --llm open_api \
    --tts qwen3 \
    --qwen3_tts_model_name Qwen/Qwen3-TTS-12Hz-1.7B-CustomVoice \
    --qwen3_tts_speaker Aiden \
    --qwen3_tts_language auto \
    --qwen3_tts_non_streaming_mode True \
    --qwen3_tts_mlx_quantization 6bit \
    --open_api_model_name gpt-5.4-mini \
    --open_api_chat_size 30 \
    --open_api_stream \
    --enable_live_transcription \
    --mode realtime
```

### Server/Client Approach

1. Run the pipeline on the server:
   ```bash
   speech-to-speech --recv_host 0.0.0.0 --send_host 0.0.0.0
   ```

2. Run the client locally to handle microphone input and receive generated audio:
   ```bash
   python scripts/listen_and_play.py --host <IP address of your server>
   ```

### WebSocket Approach

1. Run the pipeline with WebSocket mode:
   ```bash
   speech-to-speech --mode websocket --ws_host 0.0.0.0 --ws_port 8765
   ```

2. Connect to the WebSocket server from your client application at `ws://<server-ip>:8765`. The server handles bidirectional audio streaming:
   - Send raw audio bytes to the server (16kHz, int16, mono)
   - Receive generated audio bytes from the server
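
The bidirectional contract above can be exercised with a small client sketch. The PCM helper follows directly from the format description (16 kHz, int16, mono); the connection part assumes the `websockets` Python package (already a dependency of this project) and the default `ws://localhost:8765` endpoint:

```python
import array
import asyncio

def to_pcm16(samples: list[float]) -> bytes:
    """Convert float samples in [-1, 1] to raw 16-bit signed mono PCM bytes
    (native byte order, which is little-endian on mainstream platforms)."""
    ints = array.array("h", (max(-32768, min(32767, round(s * 32767))) for s in samples))
    return ints.tobytes()

async def talk(uri: str = "ws://localhost:8765") -> None:
    import websockets  # third-party; listed in this project's dependencies
    async with websockets.connect(uri) as ws:
        # One second of 16 kHz silence stands in for microphone input here.
        await ws.send(to_pcm16([0.0] * 16000))
        reply = await ws.recv()  # raw int16 mono audio from the pipeline
        print(f"received {len(reply)} bytes of audio")

# Against a running server: asyncio.run(talk())
```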

### Local Approach (Mac)

1. For optimal settings on Mac:
   ```bash
   speech-to-speech --local_mac_optimal_settings
   ```

   You can also specify a particular LLM model:
   ```bash
   speech-to-speech \
       --local_mac_optimal_settings \
       --lm_model_name mlx-community/Qwen3-4B-Instruct-2507-bf16
   ```

This setting:
- adds `--device mps` so all models run on MPS
- sets [Parakeet TDT](https://huggingface.co/nvidia/parakeet-tdt-1.1b) for STT (fast streaming ASR on Apple Silicon)
- sets MLX LM for the language model (use `--lm_model_name` to choose the checkpoint)
- sets Qwen3-TTS for TTS

`--tts pocket` and `--tts kokoro` are also valid TTS options on macOS. On Apple Silicon, Qwen3-TTS runs through `mlx-audio` and defaults to the `6bit` MLX variant unless you explicitly select a different quantization or model suffix. To compare the MLX variants locally, run:

```bash
python scripts/benchmark_tts.py \
    --handlers qwen3 \
    --iterations 3 \
    --qwen3_mlx_quantizations bf16 4bit 6bit 8bit
```

### Docker Server

#### Install the NVIDIA Container Toolkit

Follow the [installation guide](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/latest/install-guide.html).

#### Start the Docker container
```bash
docker compose up
```



### Recommended usage with CUDA

Leverage Torch Compile for Whisper together with Pocket TTS for a simple low-latency setup:

```bash
speech-to-speech \
    --lm_model_name microsoft/Phi-3-mini-4k-instruct \
    --stt_compile_mode reduce-overhead \
    --tts pocket \
    --recv_host 0.0.0.0 \
    --send_host 0.0.0.0
```

### Multi-language Support

The pipeline currently supports English, French, Spanish, Chinese, Japanese, and Korean.  
Two use cases are considered:

- **Single-language conversation**: Enforce the language setting using the `--language` flag, specifying the target language code (default is 'en').
- **Language switching**: Set `--language` to 'auto'. The STT detects the language of each spoken prompt and forwards it to the LLM. Optionally, opt in with `--lm_enable_lang_prompt` (or `--open_api_enable_lang_prompt` for the OpenAI-compatible backend) to also append a "`Please reply to my message in ...`" instruction so the LLM replies in the detected language. Both flags default to `False` — large LLMs usually pick up the language from context on their own, but the explicit instruction can help smaller models stay in the right language.

Please note that you must use STT and LLM checkpoints compatible with the target language(s). For multilingual TTS, use ChatTTS or another backend that supports the target language.

#### With the server version:

For automatic language detection:

```bash
speech-to-speech \
    --stt whisper-mlx \
    --stt_model_name large-v3 \
    --language auto \
    --llm mlx-lm \
    --lm_model_name mlx-community/Qwen3-4B-Instruct-2507-bf16
```

Or to enforce one language in particular, Chinese in this example:

```bash
speech-to-speech \
    --stt whisper-mlx \
    --stt_model_name large-v3 \
    --language zh \
    --llm mlx-lm \
    --lm_model_name mlx-community/Qwen3-4B-Instruct-2507-bf16
```

#### Local Mac Setup

For automatic language detection (note: `--stt whisper-mlx` overrides the default parakeet-tdt from optimal settings, since Whisper `large-v3` has broader language coverage):

```bash
speech-to-speech \
    --local_mac_optimal_settings \
    --stt whisper-mlx \
    --stt_model_name large-v3 \
    --language auto \
    --lm_model_name mlx-community/Qwen3-4B-Instruct-2507-bf16
```

Or to enforce one language in particular, Chinese in this example:

```bash
speech-to-speech \
    --local_mac_optimal_settings \
    --stt whisper-mlx \
    --stt_model_name large-v3 \
    --language zh \
    --lm_model_name mlx-community/Qwen3-4B-Instruct-2507-bf16
```

### Using Pocket TTS

Pocket TTS from Kyutai Labs provides streaming TTS with voice cloning capabilities. To use it:

```bash
speech-to-speech \
    --tts pocket \
    --pocket_tts_voice jean \
    --pocket_tts_device cpu
```

Available voice presets: `alba`, `marius`, `javert`, `jean`, `fantine`, `cosette`, `eponine`, `azelma`. You can also use custom voice files or Hugging Face Hub paths.

## Command-line Usage

> **_NOTE:_** References for all the CLI arguments can be found directly in the [arguments classes](./src/speech_to_speech/arguments_classes) or by running `speech-to-speech -h`.

### Module-level parameters
See the [ModuleArguments](./src/speech_to_speech/arguments_classes/module_arguments.py) class. It lets you set:
- a common `--device` (if every part should run on the same device)
- the `--mode`, `local` or `server`
- the chosen STT implementation
- the chosen LM implementation
- the chosen TTS implementation
- the logging level

### VAD parameters
See [VADHandlerArguments](./src/speech_to_speech/arguments_classes/vad_arguments.py) class. Notably:
- `--thresh`: Threshold value to trigger voice activity detection.
- `--min_speech_ms`: Minimum duration of detected voice activity to be considered speech.
- `--min_silence_ms`: Minimum length of silence intervals for segmenting speech, balancing sentence cutting and latency reduction.


### STT, LM and TTS parameters

`model_name`, `torch_dtype`, and `device` are exposed for each implementation of the Speech to Text, Language Model, and Text to Speech parts. Specify the targeted pipeline part with the corresponding prefix (e.g. `stt`, `lm`, or `tts`); check the implementations' [arguments classes](./src/speech_to_speech/arguments_classes) for more details.

For example:
```bash
--lm_model_name google/gemma-2b-it
```

### Generation parameters

Other generation parameters of the model's generate method can be set using the part's prefix + `_gen_`, e.g., `--stt_gen_max_new_tokens 128`. These parameters can be added to the pipeline part's arguments class if not already exposed.
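
As an illustration of the naming convention (this sketch is not the project's actual argument parser), a `<part>_gen_<param>` flag splits into the pipeline part it targets and the keyword argument forwarded to that part's `generate()` call:

```python
def split_gen_flag(flag: str) -> tuple[str, str]:
    """Split a '--<part>_gen_<param>' flag into (part, generate-kwarg name).

    Hypothetical helper illustrating the CLI naming convention only.
    """
    name = flag.lstrip("-")
    part, sep, param = name.partition("_gen_")
    if not sep:
        raise ValueError(f"{flag!r} is not a generation flag")
    return part, param

print(split_gen_flag("--stt_gen_max_new_tokens"))  # ('stt', 'max_new_tokens')
```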

## Citations

### Silero VAD
```bibtex
@misc{silero_vad,
  author = {Silero Team},
  title = {Silero VAD: pre-trained enterprise-grade Voice Activity Detector (VAD), Number Detector and Language Classifier},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/snakers4/silero-vad}},
  commit = {insert_some_commit_here},
  email = {hello@silero.ai}
}
```

### Distil-Whisper
```bibtex
@misc{gandhi2023distilwhisper,
      title={Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling},
      author={Sanchit Gandhi and Patrick von Platen and Alexander M. Rush},
      year={2023},
      eprint={2311.00430},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### Parler-TTS
```bibtex
@misc{lacombe-etal-2024-parler-tts,
  author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
  title = {Parler-TTS},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```
