Metadata-Version: 2.4
Name: veotools
Version: 0.2.0
Summary: A Python toolkit for AI-powered video generation and seamless stitching using Google's Veo models.
Project-URL: Homepage, https://github.com/frontboat/veotools
Project-URL: Repository, https://github.com/frontboat/veotools
Project-URL: Issues, https://github.com/frontboat/veotools/issues
Author: frontboat
License: MIT
License-File: LICENSE
Keywords: generation,google-genai,mcp,stitching,veo,video
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Multimedia :: Video
Classifier: Topic :: Software Development :: Libraries
Requires-Python: >=3.10
Requires-Dist: google-genai>=1.56.0
Requires-Dist: numpy>=1.26.0
Requires-Dist: opencv-python>=4.8.0
Requires-Dist: pydantic>=2.11.0
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: requests>=2.31.0
Provides-Extra: dev
Requires-Dist: build>=1.0.0; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.21.0; extra == 'dev'
Requires-Dist: pytest-cov>=4.1.0; extra == 'dev'
Requires-Dist: pytest-mock>=3.11.0; extra == 'dev'
Requires-Dist: pytest-xdist>=3.3.0; extra == 'dev'
Requires-Dist: pytest>=7.4.0; extra == 'dev'
Requires-Dist: twine>=4.0.0; extra == 'dev'
Provides-Extra: docs
Requires-Dist: mkdocs-autorefs>=0.5.0; extra == 'docs'
Requires-Dist: mkdocs-material>=9.5.0; extra == 'docs'
Requires-Dist: mkdocs>=1.5.0; extra == 'docs'
Requires-Dist: mkdocstrings[python]>=0.24.0; extra == 'docs'
Provides-Extra: mcp
Requires-Dist: mcp[cli]>=1.25.0; extra == 'mcp'
Description-Content-Type: text/markdown

# Veotools

![PyPI - Downloads](https://img.shields.io/pypi/dm/veotools?style=flat-square&logo=python&labelColor=black&color=blue)

A Python toolkit for orchestrating multi-scene videos with Google Veo. Handles scene planning, video rendering, overlap-aware stitching, and progress tracking.

## Features
- **One-command stories** – Turn ideas into storyboards, render clips, and deliver stitched videos
- **Gemini planning** – Generate cinematic shot lists with consistent characters and dialogue
- **Veo 3.1 support** – Full SDK access with latest Veo 3.1 features including reference images, video extension, and frame interpolation
- **Audio-preserving stitching** – FFmpeg pipeline keeps audio aligned across clip boundaries
- **Python-first** – Clean API for programmatic video generation workflows

## Installation

```bash
pip install veotools # CLI and SDK
pip install "veotools[mcp]" # Add MCP server support
pip install -e ".[dev,mcp]" # Contribute / run tests locally

# Environment (choose your models / region access)
export GEMINI_API_KEY="your-gemini-key"

# Daydreams Router support (optional)
export VEO_PROVIDER="daydreams"
export DAYDREAMS_API_KEY="sk-router-..."
# export DAYDREAMS_BASE_URL="https://api-beta.daydreams.systems/v1" # override if needed
```

Set `VEO_PROVIDER=daydreams` to proxy prompts and video jobs through the Daydreams Router. Omit the flag (or set it to `google`) to call the Google GenAI SDK directly.
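The same switch can be flipped from Python before calling `veo.init()`. A minimal sketch, assuming the library reads these variables from the process environment (the usual `python-dotenv` pattern); the lookup shown at the end only illustrates the documented "defaults to google" behaviour:

```python
import os

# Route prompts and video jobs through the Daydreams Router.
# Omit VEO_PROVIDER (or set it to "google") to call the Google GenAI SDK directly.
os.environ["VEO_PROVIDER"] = "daydreams"
os.environ["DAYDREAMS_API_KEY"] = "sk-router-..."  # placeholder, not a real key

# How a provider lookup with a "google" default resolves:
provider = os.environ.get("VEO_PROVIDER", "google")
print(provider)  # daydreams
```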

## Quick start (CLI)

```bash
# Plan + render + stitch in one command
veo plan-run \
  --idea "N64 Japanese retro explainer about the x402 protocol" \
  --save-plan output-plans/x402.json \
  --execute-model veo-3.0-generate-001 \
  --seed-last-frame --seed-offset -0.25
```

Outputs land in:
- `output-plans/x402.json` – Gemini storyboard (only written when you pass `--save-plan`).
- `output/videos/video_*.mp4` – Individual clips.
- `output/videos/stitched_*.mp4` – Final stitched master with audio.

You can also execute an existing plan:

```bash
veo plan-execute --plan output-plans/x402.json --model veo-3.0-generate-001 --seed-last-frame
```

…and generate only the storyboard:

```bash
veo plan --idea "Retro travel vlog" --scenes 4 --save output-plans/vlog.json --json
```
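A saved plan is plain JSON, so it can be inspected or hand-edited before `veo plan-execute`. A sketch using a hypothetical storyboard shape — the real field names may differ, so check an actual file produced by `veo plan --save`:

```python
import json
from pathlib import Path

# Hypothetical plan shape; a real file from `veo plan --save ...`
# may use different keys.
plan = {
    "idea": "Retro travel vlog",
    "scenes": [
        {"prompt": "Opening shot of a neon-lit airport at dusk"},
        {"prompt": "Handheld walk through a night market"},
    ],
}

path = Path("output-plans/vlog.json")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(plan, indent=2))

# Reload and review each scene prompt before rendering.
loaded = json.loads(path.read_text())
for i, scene in enumerate(loaded["scenes"], 1):
    print(f"Scene {i}: {scene['prompt']}")
```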

## Quick start (Python)

```python
import veotools as veo

veo.init()  # sets up logging + validates GEMINI_API_KEY

plan = veo.generate_scene_plan(
    "N64 Japanese retro explainer about the x402 protocol",
    number_of_scenes=4,
    additional_context="Keep the tone energetic and educational",
)

result = veo.execute_scene_plan(
    plan,
    model="veo-3.0-generate-001",
    auto_seed_last_frame=True,        # feed each clip the previous clip's final frame
    seed_frame_offset=-0.25,          # extract the seed frame 0.25s before the end
)

print("Rendered clip files:")
for clip in result.clip_results:
    print(" -", clip.path)

if result.final_result:
    print("Final stitched video:", result.final_result.path)
```
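The `seed_frame_offset` is measured from the clip's end, so for an 8-second clip an offset of `-0.25` seeds the next clip with the frame at 7.75s. A small helper sketch of that arithmetic (the clamp to zero for very short clips is an assumption, not documented library behaviour):

```python
def seed_timestamp(clip_duration: float, offset: float = -0.25) -> float:
    """Timestamp (seconds) of the frame used to seed the next clip.

    A negative offset is measured back from the end of the clip;
    clamping at 0.0 is an assumption for very short clips.
    """
    return max(0.0, clip_duration + offset)

print(seed_timestamp(8.0))         # 7.75
print(seed_timestamp(8.0, -1.0))   # 7.0
print(seed_timestamp(0.1, -0.25))  # 0.0 (clamped)
```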

## CLI overview
- `veo plan` – Generate / save a Gemini storyboard (structured JSON).
- `veo plan-execute` – Render clips + stitch an existing plan.
- `veo plan-run` – Plan and execute in one shot (see quick start).
- `veo generate` / `veo continue` – Low-level Veo helpers for ad hoc clips.
- `veo list-models`, `veo preflight` – Environment + model diagnostics.

Run `veo <command> --help` for full flag descriptions.

## Python API map

| Task | Function(s) |
|------|-------------|
| Scene planning | `generate_scene_plan`, `SceneWriter` |
| Plan execution | `execute_scene_plan`, `PlanExecutionResult` |
| Single clip generation | `generate_from_text`, `generate_from_image`, `generate_from_video` |
| Veo 3.1 features | `extend_video`, `generate_with_reference_images`, `generate_with_interpolation` |
| Media analysis | `extract_frame`, `extract_frames`, `get_video_info` |
| Stitching | `stitch_videos`, `stitch_with_transitions`, `create_transition_points` |
| Workflow chaining | `Bridge` (fluent API for multi-step stories) |

### Veo 3.1 features

```python
from veotools import extend_video, generate_with_reference_images, generate_with_interpolation
from pathlib import Path

# Extend an existing video (7-second extensions, up to 20x)
result = extend_video(
    video_path=Path("input.mp4"),
    prompt="Continue with a dramatic reveal",
    model="veo-3.1-generate-preview"
)

# Generate with reference images for character/scene consistency
result = generate_with_reference_images(
    prompt="A character walking through the forest",
    reference_images=["char_ref.jpg", "forest_ref.jpg"],
    model="veo-3.1-generate-preview"
)

# Interpolate between first and last frames
result = generate_with_interpolation(
    first_frame=Path("start.jpg"),
    last_frame=Path("end.jpg"),
    prompt="Smooth transition showing time passing",
    model="veo-3.1-generate-preview"
)
```

### Model / safety notes
- Veo 3.x text-to-video clips default to `person_generation="allow_all"`; image/video-seeded clips default to `allow_adult`. Override via keyword arguments if you need a different policy.
- Use `list_models(include_remote=True)` to discover the Veo variants your account can access (stable IDs: `veo-3.1-generate-001`, `veo-3.0-generate-001`, `veo-3.0-fast-generate-001`, etc.).
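Assuming each entry returned by `list_models(include_remote=True)` carries an `id`-like field (the exact return shape is not shown here, so treat this as a hypothetical sketch), picking out the stable IDs could look like:

```python
# Hypothetical response shape; substitute the real return value of
# veotools.list_models(include_remote=True).
models = [
    {"id": "veo-3.1-generate-001"},
    {"id": "veo-3.1-generate-preview"},
    {"id": "veo-3.0-generate-001"},
    {"id": "veo-3.0-fast-generate-001"},
]

# Stable IDs end in a numbered suffix rather than "-preview".
stable = [m["id"] for m in models if not m["id"].endswith("-preview")]
print(stable)
# ['veo-3.1-generate-001', 'veo-3.0-generate-001', 'veo-3.0-fast-generate-001']
```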

### Supported models

| Model | Features |
|-------|----------|
| `veo-3.1-generate-preview` | Reference images, video extension, frame interpolation, audio, 1080p |
| `veo-3.1-fast-generate-preview` | Same as above, optimized for speed |
| `veo-3.0-generate-001` | Audio, resolution control, seed support |
| `veo-3.0-fast-generate-001` | Same as above, optimized for speed |
| `veo-2.0-generate-001` | Duration control, enhance prompt, FPS control |

### Storage layout
- `output/videos/` – All rendered clips, stitched deliverables, and intermediate assets.
- `output/frames/` – Extracted stills (including auto-seed frames when enabled).
- `output/ops/` – MCP job records, cached model lists, etc.
- Override with `VEO_OUTPUT_DIR` or pass a custom base path to `StorageManager`.
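The whole layout can be redirected via the env-var route. A sketch assuming `StorageManager` resolves `VEO_OUTPUT_DIR` at init time (check its constructor for the exact precedence between the env var and an explicit base path):

```python
import os
from pathlib import Path

# Point all veotools output at a custom base directory.
os.environ["VEO_OUTPUT_DIR"] = "/tmp/veo-renders"

# The documented subfolders, relative to the configured base:
base = Path(os.environ["VEO_OUTPUT_DIR"])
videos, frames, ops = base / "videos", base / "frames", base / "ops"
print(videos.as_posix())  # /tmp/veo-renders/videos
```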

## Examples
- `examples/plan_run.py` – Full idea → plan → render workflow in ~30 lines.
- `examples/text_to_video.py` – Minimal text-to-video clip generator.
- `examples/bridge_story.py` – Build a three-scene story with the fluent `Bridge` API.

## Contributing & tests
Pull requests are welcome! Please:
1. Install with `pip install -e ".[dev,mcp]"`.
2. Run `pytest` (the suite relies on mocks; no live API calls).
3. Update docs/examples when behaviour changes.

MCP users can launch the server with `veo-mcp` or `python -m veotools.server.mcp_server`.
