// open source audio pipeline
Script → Voice → Production.
Fully automated.
Transform AI-generated scripts into fully produced podcasts and audiobooks. Multi-voice TTS via ElevenLabs, layered sound effects, music beds, and ambient audio — all orchestrated from a single structured script format.
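The structured script format itself isn't shown on this page, so the following is only a hedged sketch of what a multi-voice script with layered audio cues could look like; the speaker labels and the [SFX: ...] / [MUSIC: ...] tag style are illustrative assumptions, not the pipeline's documented syntax:

```markdown
# S01E01: Pilot

[MUSIC: intro bed, fade under dialogue]
HOST: Welcome back to the show.
GUEST: Thanks for having me.
[SFX: coffee shop ambience, low volume]
HOST: Today we're talking about fully automated audio production.
```

In a format like this, each speaker label would map to an ElevenLabs voice, while the bracketed cues would drive the sound-effect, music-bed, and ambient layers.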
// how it works
From raw idea to finished audio in five stages.
// audio demos
Sample outputs generated entirely by xil-pipeline — AI script, ElevenLabs voices, layered audio.
// Demo audio coming soon — watch the repo for release announcements.
// quick start
Install, write a script, render audio. That's the whole loop.
# Scaffold a new project workspace (creates a copy of the sample script)
$ xil-init my-show --show "My Podcast"
$ cd my-show
# Scan the sample script (pre-flight check)
$ xil-scan scripts/sample_S01E01.md
# Parse into structured JSON
$ xil-parse scripts/sample_S01E01.md --episode S01E01
# Preview TTS character cost (no API calls)
$ xil-produce --episode S01E01 --dry-run
# Generate voice and SFX stems (requires ELEVENLABS_API_KEY — see Environment below)
$ xil-produce --episode S01E01
# Export DAW layers for mixing in Audacity
$ xil-daw --episode S01E01
# Produce final master MP3
$ xil-master --episode S01E01
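The voice-generation step above assumes ELEVENLABS_API_KEY is set before xil-produce makes any API calls. A minimal setup sketch, where the key value is a placeholder and the .env convention is an assumption about the Environment section referenced above:

```shell
# Export the ElevenLabs key for the current shell session (placeholder value)
export ELEVENLABS_API_KEY="your-key-here"

# Optionally persist it in a local .env file, kept out of version control
echo 'ELEVENLABS_API_KEY=your-key-here' >> .env
echo '.env' >> .gitignore
```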
// documentation
Everything you need to build, extend, and contribute to xil-pipeline.
// capabilities
A full audio production stack, driven by code.
// open source
xil-pipeline is early-stage and actively welcomes contributors. Every PR counts.
Fork the repo on GitHub and clone it locally. Read CONTRIBUTING.md to get oriented.
Browse open issues tagged good first issue or help wanted to find a good starting point.
Run the test suite, make your changes, and add tests for new functionality.
Submit your pull request with a clear description. We review quickly and give constructive feedback.
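The steps above amount to the standard GitHub fork workflow, sketched here as a command outline; the fork URL, branch name, and test command are placeholders, since the repository URL and test tooling aren't specified on this page:

```shell
# Fork-based contribution loop (all specifics below are placeholders):
#   git clone https://github.com/YOUR-USERNAME/xil-pipeline.git
#   cd xil-pipeline
#   git checkout -b fix/my-change
#   ...make your changes and add tests for new functionality...
#   run the project's test suite (see CONTRIBUTING.md for the exact command)
#   git commit -am "Clear description of the change"
#   git push origin fix/my-change
# Then open a pull request from your fork on GitHub.
```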