# GenAI Processors Documentation for LLMs

The GenAI Processors library provides composable, async building blocks for generative AI pipelines.
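The composable-async-stream idea can be illustrated with plain asyncio; `lowercase`, `exclaim`, and `source` below are hypothetical stand-ins, not library APIs (real processors come from `genai_processors` and operate on `ProcessorPart` streams):

```python
import asyncio
from typing import AsyncIterator

# Hypothetical stand-in processors illustrating the composable
# async-stream idea; real processors operate on ProcessorParts.
async def lowercase(stream: AsyncIterator[str]) -> AsyncIterator[str]:
    async for part in stream:
        yield part.lower()

async def exclaim(stream: AsyncIterator[str]) -> AsyncIterator[str]:
    async for part in stream:
        yield part + "!"

async def source() -> AsyncIterator[str]:
    for part in ["Hello", "World"]:
        yield part

async def main() -> list[str]:
    # Compose by feeding one processor's output stream into the next.
    return [part async for part in exclaim(lowercase(source()))]

print(asyncio.run(main()))  # ['hello!', 'world!']
```

Each building block consumes and produces an async stream, so pipelines stay fully streaming end to end.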

## 🚀 Best Practices & Gotchas
* **Pass inputs directly**: `ProcessorContentTypes` and `ProcessorPartTypes` are wide Unions that accept many existing content types, including plain strings. `ContentStream` automatically coerces them into `ProcessorPart`s, so you do not need to wrap strings explicitly. Processors can ingest `ProcessorContentTypes` directly; `stream_content` is not required. See `content_api.py` for the full set of accepted types.
* **Preserve multimodal content**: Do not narrow content to `.text()` prematurely. Models accept multimodal content (images, audio, etc.), so keep it as `ProcessorContentTypes` and narrow it to text only at the last possible moment, if at all.
* **Safe `.text` access**: Non-text parts raise `ValueError` when `.text` is accessed. Use `content_api.is_text(part.mimetype)` to check whether a part is text. If the code cannot handle non-text parts properly anyway, just let the exception propagate. `part.text` is always present and never `None`, so do not use `hasattr` or `is not None` to validate it.
* **Processor Output Gathering**: Calling `await processor(input_content).gather()` is the simplest way to execute a processor and accumulate all outputs into a single `ProcessorContent` object. **DO NOT** use `apply_async` or manual loops to iterate and accumulate parts unless you need to process them sequentially as they arrive.
* **CLI Input**: Use `genai_processors.core.text.terminal_input("User: ")` rather than manual `input()` loops; it handles async yielding, `end_of_turn` signaling, and exiting for you.
* **Conversation Context**: Turn-based models require your code to maintain the conversation history; live models do it for you.
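The `.gather()` pattern above can be sketched with plain asyncio; `Gatherable` and `upper_processor` are hypothetical stand-ins mimicking the shape of `await processor(input_content).gather()` without requiring the library:

```python
import asyncio
from typing import AsyncIterator

# Hypothetical stand-in for the object a processor call returns; the
# real library's result exposes a similar awaitable gather().
class Gatherable:
    def __init__(self, agen: AsyncIterator[str]):
        self._agen = agen

    async def gather(self) -> list[str]:
        # Accumulate every streamed part into one collection.
        return [part async for part in self._agen]

def upper_processor(parts: list[str]) -> Gatherable:
    # Stand-in processor: plain strings are accepted as input, echoing
    # how ContentStream coerces strings into ProcessorParts.
    async def run() -> AsyncIterator[str]:
        for part in parts:
            yield part.upper()
    return Gatherable(run())

result = asyncio.run(upper_processor(["hello", "world"]).gather())
print(result)  # ['HELLO', 'WORLD']
```

One awaited call replaces the manual iterate-and-append loop.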
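The safe-`.text` guard might look like the following; `Part` and `is_text` here are hypothetical stand-ins for `ProcessorPart` and `content_api.is_text` so the sketch runs without the library installed:

```python
from dataclasses import dataclass

def is_text(mimetype: str) -> bool:
    # Stand-in for content_api.is_text.
    return mimetype.startswith("text/")

# Hypothetical stand-in for ProcessorPart: .text raises on non-text
# parts, matching the behaviour described above.
@dataclass
class Part:
    mimetype: str
    _text: str = ""

    @property
    def text(self) -> str:
        if not is_text(self.mimetype):
            raise ValueError(f"not a text part: {self.mimetype}")
        return self._text

parts = [Part("text/plain", "hello"), Part("image/png")]
# Guard with is_text() when mixed content is expected...
texts = [p.text for p in parts if is_text(p.mimetype)]
print(texts)  # ['hello']
# ...otherwise just access .text and let the ValueError propagate.
```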
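Turn-based history management can be sketched like this; `call_model` is a hypothetical stand-in for a real model invocation:

```python
# Hypothetical stand-in for a turn-based model call; a real pipeline
# would send the accumulated history to GenaiModel on every turn.
def call_model(history: list[dict]) -> str:
    return f"echo: {history[-1]['content']}"

history: list[dict] = []

def take_turn(user_text: str) -> str:
    # Turn-based models are stateless across calls, so the caller must
    # append both sides of every exchange and resend the full history.
    history.append({"role": "user", "content": user_text})
    reply = call_model(history)
    history.append({"role": "model", "content": reply})
    return reply

take_turn("hi")
take_turn("what's up?")
print(len(history))  # 4
```

With a live model this bookkeeping disappears: the session holds the context server-side.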

## 🔎 Core Modules (`genai_processors.core.*`)
* `genai_model.py`: Google GenAI API integration (`GenaiModel`).
* `text.py`: Shell IO (`terminal_input`, `terminal_output`), URL extractors, match processors.
* `function_calling.py`: Function schemas and tool calling support.
* `live_model.py` / `realtime.py`: Streaming and WebRTC interfaces.
* `filesystem.py`, `web.py`, `github.py`, `pdf.py`: File IO, fetching, and document parsing.
* `audio.py`, `video.py`: Multimodal processors.

## 📖 Where to Look for Help
* **Documentation Guides** (`documentation/docs/`):
  * `getting-started.md` / `index.md`: Setup and basics.
  * `concepts/processor.md` / `concepts/async-streaming.md`: Core framework logic.
  * `development/caching.md` / `development/tracing.md`: Optimizations and tracing.
  * `rapid-prototyping/ai-studio.md` / `rapid-prototyping/cli.md`: App deployment references.
* **Examples** (`examples/` directory):
  * `chat.py`: Standard chatbot architecture.
  * `live_simple_cli.py` / `realtime_simple_cli.py`: Streaming architectures.
  * `research/`: Modular multi-agent composition.