Metadata-Version: 2.3
Name: voiceai-sdk
Version: 0.1.0
Summary: The official Python library for the slng API
Project-URL: Homepage, https://github.com/slng-ai/voiceai-sdk-python
Project-URL: Repository, https://github.com/slng-ai/voiceai-sdk-python
Author-email: Slng <support@slng.ai>
License: Apache-2.0
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: MacOS
Classifier: Operating System :: Microsoft :: Windows
Classifier: Operating System :: OS Independent
Classifier: Operating System :: POSIX
Classifier: Operating System :: POSIX :: Linux
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Typing :: Typed
Requires-Python: >=3.9
Requires-Dist: anyio<5,>=3.5.0
Requires-Dist: distro<2,>=1.7.0
Requires-Dist: httpx<1,>=0.23.0
Requires-Dist: pydantic<3,>=1.9.0
Requires-Dist: sniffio
Requires-Dist: typing-extensions<5,>=4.14
Provides-Extra: aiohttp
Requires-Dist: aiohttp; extra == 'aiohttp'
Requires-Dist: httpx-aiohttp>=0.1.9; extra == 'aiohttp'
Description-Content-Type: text/markdown

# VoiceAI SDK for Python

The official Python SDK for SLNG Voice AI. Use one client for text-to-speech, speech-to-text, and model discovery across SLNG-hosted and provider-backed speech models.

The package supports Python 3.9 and newer, with sync and async clients powered by [httpx](https://github.com/encode/httpx).

## Install

Until the Python package is published to PyPI, install it from GitHub:

```sh
pip install git+https://github.com/slng-ai/voiceai-sdk-python.git
```

For local development in this repository:

```sh
cd sdks/slng-python
uv sync --all-extras
```

## API Key

Create an API key in the [SLNG dashboard](https://app.slng.ai/api-keys), then set it in your environment:

```sh
export SLNG_API_KEY="zpka_..."
```

```python
from voiceai_sdk import Slng

client = Slng()
```

You can also pass `api_key` directly:

```python
client = Slng(api_key="zpka_...")
```

## Text To Speech

Generate audio from text and save it locally:

```python
from voiceai_sdk import Slng

client = Slng()

audio = client.text_to_speech.create(
    model_variant="slng/deepgram/aura:2-en",
    text="Hello from SLNG.",
    voice="aura-2-thalia-en",
)

audio.write_to_file("hello.wav")
```

Use `region` or `world_part` when you need explicit routing:

```python
client.text_to_speech.create(
    model_variant="slng/deepgram/aura:2-en",
    text="Hello from Europe.",
    voice="aura-2-thalia-en",
    region="eu-north-1",
)
```

## Speech To Text

Transcribe a local audio file:

```python
from pathlib import Path
from voiceai_sdk import Slng

client = Slng()

transcript = client.speech_to_text.create(
    model_variant="slng/deepgram/nova:3-en",
    audio=Path("meeting.wav"),
)

print(transcript.text)
```

File uploads accept `bytes`, path-like objects, or `(filename, contents, media_type)` tuples.
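The three accepted payload forms can be constructed like this (a minimal sketch; the bytes below are placeholder content, not a real WAV payload):

```python
from pathlib import Path

contents = b"\x00\x01"  # placeholder bytes standing in for real audio data

as_bytes = contents                                # raw bytes
as_path = Path("meeting.wav")                      # path-like object, read for you
as_tuple = ("meeting.wav", contents, "audio/wav")  # (filename, contents, media_type)

# Any of these can be passed as the `audio` argument shown above, e.g.:
# client.speech_to_text.create(model_variant="slng/deepgram/nova:3-en", audio=as_tuple)
```

The tuple form is useful when the contents are already in memory and you want to control the filename and media type the server sees.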

## Discover Models

The SDK ships with a static catalog snapshot, so agents and scripts can choose valid models without credentials or network calls.

```python
from voiceai_sdk import get_model, list_models

tts_models = list_models(service="tts", language="en")
stt_models = list_models(service="stt")
aura = get_model("slng/deepgram/aura:2-en")

print([model["id"] for model in tts_models])
print(aura["deployments"] if aura else None)
```

Model IDs from the catalog can be passed directly to `client.text_to_speech.create()` or `client.speech_to_text.create()`.
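To make the catalog-to-client handoff concrete, here is a sketch of picking a model ID and handing it to `create()`. The inline `catalog` list is a hypothetical stand-in for `list_models()` output, using only the `id` key shown in the snippet above:

```python
# Hypothetical stand-in for list_models(service="tts", language="en") output.
catalog = [
    {"id": "slng/deepgram/aura:2-en", "service": "tts"},
    {"id": "slng/deepgram/nova:3-en", "service": "stt"},
]

# Pick the first TTS entry; real code might filter by language or provider too.
tts_ids = [model["id"] for model in catalog if model["service"] == "tts"]
model_variant = tts_ids[0]

# The chosen ID is what you pass to the client, e.g.:
# client.text_to_speech.create(model_variant=model_variant, text="...", voice="...")
```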

## Discover Voices

Use `list_voices()` to find voice IDs for a TTS model:

```python
from voiceai_sdk import list_voices

voices = list_voices("slng/deepgram/aura:2-en", language="en")

print([f"{voice['name']}: {voice['voiceId']}" for voice in voices])
```

`list_voices()` returns an empty list when a model does not have a cataloged voice list.
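Because the result may be empty, a lookup helper should handle the no-voices case explicitly. A minimal sketch, where `find_voice_id` is a hypothetical helper and the `sample` data mirrors the `name`/`voiceId` shape shown above:

```python
def find_voice_id(voices, name):
    """Return the voiceId for a voice name, or None if it is absent
    (including when the model has no cataloged voice list)."""
    for voice in voices:
        if voice["name"] == name:
            return voice["voiceId"]
    return None

# Hypothetical entries shaped like list_voices() output shown above.
sample = [{"name": "Thalia", "voiceId": "aura-2-thalia-en"}]

found = find_voice_id(sample, "Thalia")   # "aura-2-thalia-en"
missing = find_voice_id([], "Thalia")     # None: empty voice list
```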

Useful voice references:

- [Deepgram Aura voices](https://docs.slng.ai/voices/deepgram-aura)
- [Cartesia Sonic 3 voices](https://docs.slng.ai/voices/cartesia-sonic-3)
- [Kugel voices](https://docs.slng.ai/voices/kugel)
- [Murf Falcon voices](https://docs.slng.ai/voices/murf)
- [Orpheus voices](https://docs.slng.ai/voices/orpheus)
- [Rime Arcana voices](https://docs.slng.ai/voices/rime-arcana)
- [Sarvam Bulbul voices](https://docs.slng.ai/voices/sarvam-bulbul)
- [Soniox TTS voices](https://docs.slng.ai/voices/soniox)
- Full voice catalogs and samples are available in the [SLNG dashboard](https://app.slng.ai).

## Async Usage

Use `AsyncSlng` for async applications:

```python
import asyncio
from voiceai_sdk import AsyncSlng


async def main() -> None:
    client = AsyncSlng()
    audio = await client.text_to_speech.create(
        model_variant="slng/deepgram/aura:2-en",
        text="Hello from async SLNG.",
        voice="aura-2-thalia-en",
    )
    await audio.write_to_file("hello.wav")


asyncio.run(main())
```

For lower-level response streaming, use `.with_streaming_response` on SDK resources.

## Reference

- Full API surface: [api.md](https://github.com/slng-ai/voiceai-sdk-python/tree/main/api.md)
- General docs: [docs.slng.ai](https://docs.slng.ai)
- Models by language: [docs.slng.ai/models/by-language](https://docs.slng.ai/models/by-language)
- Source: [github.com/slng-ai/voiceai-sdk-python](https://github.com/slng-ai/voiceai-sdk-python)
