Metadata-Version: 2.4
Name: finchvox
Version: 0.0.4
Summary: Voice AI observability dev tool for Pipecat
License-Expression: MIT
License-File: LICENSE
Requires-Python: >=3.10
Requires-Dist: aiofiles>=24.1.0
Requires-Dist: aiohttp>=3.9.0
Requires-Dist: aiortc>=1.14.0
Requires-Dist: fastapi>=0.115.0
Requires-Dist: grpcio>=1.60.0
Requires-Dist: loguru>=0.7.0
Requires-Dist: opentelemetry-exporter-otlp-proto-grpc>=1.27.0
Requires-Dist: opentelemetry-instrumentation-logging>=0.48b0
Requires-Dist: opentelemetry-proto>=1.27.0
Requires-Dist: opentelemetry-sdk>=1.27.0
Requires-Dist: pipecat-ai[cartesia,daily,deepgram,silero]>=0.0.98
Requires-Dist: protobuf<6.0.0,>=4.25.0
Requires-Dist: python-multipart>=0.0.9
Requires-Dist: uvicorn[standard]>=0.32.0
Provides-Extra: dev
Requires-Dist: pytest-asyncio>=0.23.0; extra == 'dev'
Requires-Dist: pytest-cov>=4.0.0; extra == 'dev'
Requires-Dist: pytest>=7.0.0; extra == 'dev'
Requires-Dist: ruff>=0.14.10; extra == 'dev'
Requires-Dist: twine>=6.2.0; extra == 'dev'
Description-Content-Type: text/markdown

# <img src="ui/images/finchvox-logo.png" height=24 /> Finchvox - elevated debugging for Pipecat Voice AI
Do your eyes bleed like a Vecna victim's as Pipecat logs fly by? Does flipping between audio recordings, transcripts, and logs wear out your ⌘+Tab keys? If so, meet Finchvox, a local debugger purpose-built for Voice AI apps.

Finchvox unifies conversation audio, logs, and traces in a single UI, highlighting voice-specific problems like interruptions and high user ↔ bot latency.

_👇 Click the image for a short video:_
<a href="https://raw.githubusercontent.com/itsderek23/finchvox/refs/heads/main/docs/demo.gif" target="_blank"><img src="./docs/screenshot.png" /></a>

## Table of Contents

- [Prerequisites](#prerequisites)
- [Installation](#installation)
- [Setup](#setup)
- [Configuration](#configuration)
- [Usage](#usage---finchvox-server)
- [Troubleshooting](#troubleshooting)

## Prerequisites

- Python 3.10 or higher
- A Pipecat Voice AI application

## Installation

```bash
# uv
uv add finchvox "pipecat-ai[tracing]"

# Or with pip
pip install finchvox "pipecat-ai[tracing]"
```

## Setup

1. Add the following to the top of your bot (e.g., `bot.py`):

```python
import finchvox
from finchvox import FinchvoxProcessor

finchvox.init(service_name="my-voice-app")
```

2. Add `FinchvoxProcessor` to your pipeline, ensuring it comes after `transport.output()`:

```python
pipeline = Pipeline([
    # STT, LLM, TTS, etc. processors
    transport.output(),
    FinchvoxProcessor(), # Must come after transport.output()
    context_aggregator.assistant(),
])
```

3. Initialize your `PipelineTask` with metrics, tracing, and turn tracking enabled:

```python
task = PipelineTask(
    pipeline,
    params=PipelineParams(enable_metrics=True),
    enable_tracing=True,
    enable_turn_tracking=True,
)
```

## Configuration

The `finchvox.init()` function accepts the following optional parameters:

| Parameter | Default | Description |
|-----------|---------|-------------|
| `endpoint` | `"http://localhost:4317"` | Finchvox collector endpoint |
| `insecure` | `True` | Use insecure gRPC connection (no TLS) |
| `capture_logs` | `True` | Send logs to collector alongside traces |
| `log_modules` | `None` | Additional module prefixes to capture (e.g., `["myapp"]`) |

By default, logs from `pipecat.*`, `finchvox.*`, and `__main__` are captured. Use `log_modules` to include logs from your own modules.
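Putting the table together, a full `init()` call might look like the sketch below. Only the parameters documented above are used; the endpoint and module name are illustrative placeholders:

```python
import finchvox

finchvox.init(
    service_name="my-voice-app",
    endpoint="http://localhost:4317",  # Finchvox collector endpoint
    insecure=True,                     # plain gRPC, no TLS
    capture_logs=True,                 # send logs alongside traces
    log_modules=["myapp"],             # also capture logs from myapp.*
)
```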

## Usage - Finchvox server

```bash
uv run finchvox start
```

For the list of available options, run:

```bash
uv run finchvox --help
```

## Troubleshooting

### No spans being written

1. Check that the collector is running: look for the "OTLP collector listening on port 4317" log message
2. Verify the client endpoint: ensure Pipecat is configured to send to `http://localhost:4317`
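
If the log message isn't there, a quick way to confirm whether anything is listening on the collector port (assuming a Unix-like system with `nc` available):

```bash
nc -z localhost 4317 && echo "collector port open" || echo "nothing listening on 4317"
```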
