Metadata-Version: 2.4
Name: narrative-ai-framework
Version: 0.2.1
Summary: AI-powered voice diary framework: STT, TTS, LLM, RAG, and voice-agent engines
Author: Narrative AI Team
License: MIT
Keywords: ai,voice,diary,stt,tts,llm,rag,arabic
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: PyYAML>=6.0.1
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: pydantic>=2.5.0
Requires-Dist: aiohttp>=3.9.0
Requires-Dist: aiofiles>=23.2.1
Requires-Dist: requests>=2.28.0
Requires-Dist: shortuuid>=1.0.11
Requires-Dist: pyngrok>=7.0.0
Requires-Dist: nest-asyncio>=1.5.8
Requires-Dist: sympy>=1.12
Provides-Extra: stt
Requires-Dist: soundfile>=0.12.1; extra == "stt"
Requires-Dist: scipy>=1.11.0; extra == "stt"
Requires-Dist: webrtcvad>=2.0.10; extra == "stt"
Requires-Dist: numpy>=1.24.0; extra == "stt"
Requires-Dist: elevenlabs>=0.2.0; extra == "stt"
Requires-Dist: yt-dlp>=2023.11.0; extra == "stt"
Requires-Dist: pydub>=0.25.1; extra == "stt"
Requires-Dist: transformers>=4.36.0; extra == "stt"
Requires-Dist: accelerate>=0.25.0; extra == "stt"
Requires-Dist: torch>=2.1.0; extra == "stt"
Requires-Dist: ctranslate2>=4.0.0; extra == "stt"
Requires-Dist: faster-whisper>=1.0.0; extra == "stt"
Provides-Extra: tts
Requires-Dist: aiohttp>=3.9.0; extra == "tts"
Requires-Dist: numpy>=1.24.0; extra == "tts"
Provides-Extra: ocr
Requires-Dist: opencv-python>=4.8.0; extra == "ocr"
Requires-Dist: scikit-image>=0.21.0; extra == "ocr"
Requires-Dist: pdf2image>=1.16.3; extra == "ocr"
Requires-Dist: python-docx>=1.1.0; extra == "ocr"
Requires-Dist: einops>=0.6.1; extra == "ocr"
Requires-Dist: torch>=2.0.1; extra == "ocr"
Requires-Dist: torchvision>=0.15.2; extra == "ocr"
Requires-Dist: transformers>=4.45.0; extra == "ocr"
Requires-Dist: accelerate>=0.26.0; extra == "ocr"
Requires-Dist: qwen-vl-utils>=0.0.4; extra == "ocr"
Requires-Dist: timm>=0.9.2; extra == "ocr"
Requires-Dist: basicsr>=1.4.2; extra == "ocr"
Requires-Dist: realesrgan>=0.3.0; extra == "ocr"
Provides-Extra: llm
Requires-Dist: google-generativeai>=0.3.0; extra == "llm"
Requires-Dist: google-genai>=0.3.0; extra == "llm"
Requires-Dist: openai>=1.3.0; extra == "llm"
Requires-Dist: anthropic>=0.18.0; extra == "llm"
Requires-Dist: tiktoken>=0.5.0; extra == "llm"
Provides-Extra: voice
Requires-Dist: livekit>=0.11.0; extra == "voice"
Requires-Dist: livekit-api>=0.4.0; extra == "voice"
Requires-Dist: livekit-agents>=0.7.0; extra == "voice"
Requires-Dist: livekit-plugins-silero>=0.6.0; extra == "voice"
Requires-Dist: livekit-plugins-elevenlabs>=1.3.0; extra == "voice"
Requires-Dist: livekit-plugins-turn-detector>=1.3.0; extra == "voice"
Requires-Dist: livekit-plugins-noise-cancellation>=0.2.0; extra == "voice"
Requires-Dist: sounddevice>=0.5.0; extra == "voice"
Provides-Extra: db
Requires-Dist: SQLAlchemy>=2.0.0; extra == "db"
Requires-Dist: asyncpg>=0.29.0; extra == "db"
Requires-Dist: psycopg2-binary>=2.9.0; extra == "db"
Requires-Dist: alembic>=1.13.0; extra == "db"
Requires-Dist: redis>=5.0.0; extra == "db"
Provides-Extra: security
Requires-Dist: redis>=5.0.0; extra == "security"
Requires-Dist: SQLAlchemy>=2.0.0; extra == "security"
Requires-Dist: cryptography>=41.0.0; extra == "security"
Requires-Dist: PyJWT>=2.8.0; extra == "security"
Requires-Dist: bcrypt>=4.0.0; extra == "security"
Provides-Extra: api
Requires-Dist: fastapi>=0.109.0; extra == "api"
Requires-Dist: uvicorn[standard]>=0.27.0; extra == "api"
Requires-Dist: python-multipart>=0.0.6; extra == "api"
Requires-Dist: email-validator>=2.1.0; extra == "api"
Provides-Extra: rag
Requires-Dist: sentence-transformers>=2.2.2; extra == "rag"
Requires-Dist: FlagEmbedding>=1.3.5; extra == "rag"
Requires-Dist: pillow>=10.0.0; extra == "rag"
Requires-Dist: psutil>=5.9.0; extra == "rag"
Requires-Dist: unstructured[all-docs]>=0.10.0; extra == "rag"
Requires-Dist: python-magic>=0.4.27; extra == "rag"
Requires-Dist: pytesseract>=0.3.10; extra == "rag"
Requires-Dist: pgvector>=0.2.5; extra == "rag"
Requires-Dist: qdrant-client>=1.7.0; extra == "rag"
Provides-Extra: web
Requires-Dist: ddgs>=9.0.0; extra == "web"
Provides-Extra: vlm
Requires-Dist: pillow>=10.0.0; extra == "vlm"
Requires-Dist: numpy>=1.24.0; extra == "vlm"
Requires-Dist: ollama>=0.1.0; extra == "vlm"
Provides-Extra: all
Requires-Dist: soundfile>=0.12.1; extra == "all"
Requires-Dist: scipy>=1.11.0; extra == "all"
Requires-Dist: webrtcvad>=2.0.10; extra == "all"
Requires-Dist: numpy>=1.24.0; extra == "all"
Requires-Dist: elevenlabs>=0.2.0; extra == "all"
Requires-Dist: yt-dlp>=2023.11.0; extra == "all"
Requires-Dist: pydub>=0.25.1; extra == "all"
Requires-Dist: transformers>=4.36.0; extra == "all"
Requires-Dist: accelerate>=0.25.0; extra == "all"
Requires-Dist: torch>=2.1.0; extra == "all"
Requires-Dist: ctranslate2>=4.0.0; extra == "all"
Requires-Dist: faster-whisper>=1.0.0; extra == "all"
Requires-Dist: aiohttp>=3.9.0; extra == "all"
Requires-Dist: google-generativeai>=0.3.0; extra == "all"
Requires-Dist: google-genai>=0.3.0; extra == "all"
Requires-Dist: openai>=1.3.0; extra == "all"
Requires-Dist: anthropic>=0.18.0; extra == "all"
Requires-Dist: tiktoken>=0.5.0; extra == "all"
Requires-Dist: livekit>=0.11.0; extra == "all"
Requires-Dist: livekit-api>=0.4.0; extra == "all"
Requires-Dist: livekit-agents>=0.7.0; extra == "all"
Requires-Dist: livekit-plugins-silero>=0.6.0; extra == "all"
Requires-Dist: livekit-plugins-elevenlabs>=1.3.0; extra == "all"
Requires-Dist: livekit-plugins-turn-detector>=1.3.0; extra == "all"
Requires-Dist: livekit-plugins-noise-cancellation>=0.2.0; extra == "all"
Requires-Dist: sounddevice>=0.5.0; extra == "all"
Requires-Dist: SQLAlchemy>=2.0.0; extra == "all"
Requires-Dist: asyncpg>=0.29.0; extra == "all"
Requires-Dist: psycopg2-binary>=2.9.0; extra == "all"
Requires-Dist: alembic>=1.13.0; extra == "all"
Requires-Dist: redis>=5.0.0; extra == "all"
Requires-Dist: cryptography>=41.0.0; extra == "all"
Requires-Dist: PyJWT>=2.8.0; extra == "all"
Requires-Dist: bcrypt>=4.0.0; extra == "all"
Requires-Dist: fastapi>=0.109.0; extra == "all"
Requires-Dist: uvicorn[standard]>=0.27.0; extra == "all"
Requires-Dist: python-multipart>=0.0.6; extra == "all"
Requires-Dist: email-validator>=2.1.0; extra == "all"
Requires-Dist: ddgs>=9.0.0; extra == "all"
Requires-Dist: opencv-python>=4.8.0; extra == "all"
Requires-Dist: scikit-image>=0.21.0; extra == "all"
Requires-Dist: pdf2image>=1.16.3; extra == "all"
Requires-Dist: python-docx>=1.1.0; extra == "all"
Requires-Dist: einops>=0.6.1; extra == "all"
Requires-Dist: qwen-vl-utils>=0.0.4; extra == "all"
Requires-Dist: pgvector>=0.2.5; extra == "all"
Requires-Dist: qdrant-client>=1.7.0; extra == "all"
Requires-Dist: ollama>=0.1.0; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=7.4.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.23.0; extra == "dev"
Requires-Dist: pytest-cov>=4.1.0; extra == "dev"
Requires-Dist: httpx>=0.25.0; extra == "dev"
Requires-Dist: mypy>=1.7.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Dynamic: license-file

# Narrative AI SDK (v0.2.1)

---

## 🔑 LLM Engine (`nai.llm`)

### `generate()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Generates a complete text response based on a prompt. | `prompt` (str), `model` (str), `max_tokens` (int) | `LLMResponse` (Object) |
```python
import narrative_ai as nai
import asyncio
async def main():
    nai.llm.set_api_key("key", provider="openai")
    res = await nai.llm.generate("Hello")
    print(res.text)
asyncio.run(main())
```

### `generate_stream()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Streams text generation token-by-token for real-time apps. | `prompt` (str), `model` (str) | `AsyncIterator[str]` |
```python
import narrative_ai as nai
import asyncio
async def main():
    nai.llm.set_api_key("key", provider="openai")
    async for chunk in nai.llm.generate_stream("Hi"):
        print(chunk, end="", flush=True)
asyncio.run(main())
```

### `set_api_key()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Configures the API key for a specific LLM provider. | `api_key` (str), `provider` (str) | `None` |
```python
import narrative_ai as nai
nai.llm.set_api_key("sk-...", provider="openai")
```

### `set_llm_provider()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Switches the active LLM provider globally. | `provider` (str) | `None` |
```python
import narrative_ai as nai
nai.llm.set_llm_provider("gemini")
```

### `set_service_url()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Sets a custom base URL for the LLM API. | `url` (str) | `None` |
```python
import narrative_ai as nai
nai.llm.set_service_url("https://api.openai.com/v1")
```

### `get_engine()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Retrieves the underlying LLM engine instance. | `None` | `LLMEngine` |
```python
import narrative_ai as nai
engine = nai.llm.get_engine()
```

### `LLMClient`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Creates a stateful LLM client for session management. | `user_id` (str), `tenant_id` (str) | `LLMClient` |
```python
import narrative_ai as nai
client = nai.llm.LLMClient(user_id="u123")
```

---

## 🎙️ STT Engine (`nai.stt`)

### `transcribe()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Converts an audio file into text. | `audio_path` (str), `language` (str) | `STTResult` |
```python
import narrative_ai as nai
import asyncio
async def main():
    nai.stt.set_api_key("key", provider="elevenlabs")
    res = await nai.stt.transcribe("audio.mp3")
    print(res.text)
asyncio.run(main())
```

### `stream_transcribe()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Transcribes a live audio stream incrementally. | `audio_stream` (async iterator of audio chunks) | `AsyncIterator[STTResult]` |
```python
import narrative_ai as nai
import asyncio
async def main():
    nai.stt.set_api_key("key", provider="elevenlabs")
    stream = ...  # placeholder: your async audio source (e.g., microphone chunks)
    async for result in nai.stt.stream_transcribe(stream):
        print(result.text)
asyncio.run(main())
```

### `set_api_key()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Sets the API key for the STT provider. | `api_key` (str), `provider` (str) | `None` |
```python
import narrative_ai as nai
nai.stt.set_api_key("key", provider="elevenlabs")
```

### `set_stt_provider()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Changes the default STT provider. | `provider` (str) | `None` |
```python
import narrative_ai as nai
nai.stt.set_stt_provider("whisper")
```

### `get_engine()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Retrieves the raw STT engine instance. | `None` | `STTEngine` |
```python
import narrative_ai as nai
engine = nai.stt.get_engine()
```

### `STTClient`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Creates a stateful STT client. | `user_id` (str) | `STTClient` |
```python
import narrative_ai as nai
client = nai.stt.STTClient()
```

---

## 🔊 TTS Engine (`nai.tts`)

### `synthesize()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Converts text into an audio file. | `text` (str), `voice` (str) | `str` (Path) |
```python
import narrative_ai as nai
import asyncio
async def main():
    nai.tts.set_api_key("key", provider="openai")
    path = await nai.tts.synthesize("Hello")
    print(path)
asyncio.run(main())
```

### `stream_synthesize()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Streams synthesized audio bytes. | `text` (str), `voice` (str) | `AsyncIterator[bytes]` |
```python
import narrative_ai as nai
import asyncio
async def main():
    async for chunk in nai.tts.stream_synthesize("Hello"):
        print(len(chunk))
asyncio.run(main())
```
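
Because `stream_synthesize()` yields raw audio bytes, chunks can be written to disk as they arrive instead of buffering the whole clip in memory. A minimal sketch (the key, provider, and output filename are placeholders):

```python
import asyncio

async def save_speech(text: str, out_path: str) -> None:
    import narrative_ai as nai  # imported here so the sketch stays self-contained

    nai.tts.set_api_key("key", provider="openai")
    with open(out_path, "wb") as f:
        # Write each chunk as soon as it arrives rather than
        # holding the full clip in memory.
        async for chunk in nai.tts.stream_synthesize(text):
            f.write(chunk)
```

Run it with `asyncio.run(save_speech("Hello", "hello.mp3"))`.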

### `set_api_key()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Sets the TTS provider API key. | `api_key` (str), `provider` (str) | `None` |
```python
import narrative_ai as nai
nai.tts.set_api_key("key", provider="openai")
```

### `set_tts_provider()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Changes the TTS engine provider. | `provider` (str) | `None` |
```python
import narrative_ai as nai
nai.tts.set_tts_provider("elevenlabs")
```

### `get_engine()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Retrieves the TTS engine instance. | `None` | `TTSEngine` |
```python
import narrative_ai as nai
engine = nai.tts.get_engine()
```

### `TTSClient`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Creates a stateful TTS client. | `user_id` (str) | `TTSClient` |
```python
import narrative_ai as nai
client = nai.tts.TTSClient()
```

---

## 📚 RAG Engine (`nai.rag`)

### `remember()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Indexes a document into the vector store. | `document` (Doc), `doc_id` (str) | `bool` |
```python
import narrative_ai as nai
import asyncio
async def main():
    nai.rag.set_api_key("key", provider="cohere")
    doc = await nai.input_processor.process("f.pdf")
    await nai.rag.remember(doc, "id1")
asyncio.run(main())
```

### `recall()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Retrieves relevant context based on a query. | `query` (str), `top_k` (int) | `RichContext` |
```python
import narrative_ai as nai
import asyncio
async def main():
    res = await nai.rag.recall("policy info")
    print(res.formatted_text)
asyncio.run(main())
```
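
`recall()` pairs naturally with `nai.llm.generate()` for retrieval-augmented answers: retrieve context, splice it into the prompt, then generate. A sketch; the prompt template is illustrative, not part of the SDK:

```python
import asyncio

def build_prompt(context: str, question: str) -> str:
    # Illustrative template: ground the model's answer in retrieved context.
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

async def answer(question: str) -> str:
    import narrative_ai as nai  # imported here so the sketch stays self-contained

    ctx = await nai.rag.recall(question)  # RichContext
    res = await nai.llm.generate(build_prompt(ctx.formatted_text, question))
    return res.text
```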

### `forget()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Deletes a document from the vector store. | `doc_id` (str) | `bool` |
```python
import narrative_ai as nai
import asyncio
async def main():
    await nai.rag.forget("id1")
asyncio.run(main())
```

### `clear_memory()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Wipes the entire vector database. | `None` | `bool` |
```python
import narrative_ai as nai
import asyncio
async def main():
    await nai.rag.clear_memory()
asyncio.run(main())
```

### `set_api_key()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Sets the embedding provider API key. | `api_key` (str), `provider` (str) | `None` |
```python
import narrative_ai as nai
nai.rag.set_api_key("key", provider="openai")
```

### `get_manager()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Retrieves the internal memory manager. | `None` | `MemoryManager` |
```python
import narrative_ai as nai
mgr = nai.rag.get_manager()
```

### `RAGClient`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Creates a stateful RAG client. | `user_id` (str) | `RAGClient` |
```python
import narrative_ai as nai
client = nai.rag.RAGClient()
```

---

## 👁️ OCR Engine (`nai.ocr`)

### `process_image()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Extracts text from an image. | `image_path` (str) | `OCRResult` |
```python
import narrative_ai as nai
import asyncio
async def main():
    nai.ocr.set_service_url("https://...")
    res = await nai.ocr.process_image("i.jpg")
    print(res.text)
asyncio.run(main())
```

### `process_pdf()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Extracts text from all pages of a PDF. | `pdf_path` (str) | `OCRResult` |
```python
import narrative_ai as nai
import asyncio
async def main():
    res = await nai.ocr.process_pdf("d.pdf")
    print(res.text)
asyncio.run(main())
```

### `set_service_url()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Configures the OCR service endpoint. | `url` (str) | `None` |
```python
import narrative_ai as nai
nai.ocr.set_service_url("https://ocr.api.com")
```

### `set_ocr_provider()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Changes the OCR technology provider. | `provider` (str) | `None` |
```python
import narrative_ai as nai
nai.ocr.set_ocr_provider("google_vision")
```

### `get_pipeline()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Retrieves the OCR processing pipeline. | `None` | `OCRPipeline` |
```python
import narrative_ai as nai
p = nai.ocr.get_pipeline()
```

### `OCRClient`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Creates a stateful OCR client. | `user_id` (str) | `OCRClient` |
```python
import narrative_ai as nai
client = nai.ocr.OCRClient()
```

---

## 🛠️ Input Processor (`nai.input_processor`)

### `process()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Detects file type and routes to the correct engine. | `source` (Any) | `StructuredDocument` |
```python
import narrative_ai as nai
import asyncio
async def main():
    doc = await nai.input_processor.process("any_file.mp3")
    print(doc.text)
asyncio.run(main())
```

### `process_batch()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Processes multiple files concurrently. | `sources` (List) | `List[Doc]` |
```python
import narrative_ai as nai
import asyncio
async def main():
    docs = await nai.input_processor.process_batch(["f1.jpg", "f2.pdf"])
asyncio.run(main())
```
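
`process_batch()` combines with `nai.rag.remember()` to bulk-index a folder of mixed files. A sketch that uses each file path as its document id (an arbitrary choice for illustration, not an SDK convention):

```python
import asyncio

async def index_files(paths: list[str]) -> None:
    import narrative_ai as nai  # imported here so the sketch stays self-contained

    # process_batch() routes each file to the right engine concurrently.
    docs = await nai.input_processor.process_batch(paths)
    for path, doc in zip(paths, docs):
        await nai.rag.remember(doc, path)  # the path doubles as the doc id
```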

### `process_audio()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Routes an audio file to the STT engine. | `path` (str) | `Doc` |
```python
import narrative_ai as nai
import asyncio
async def main():
    doc = await nai.input_processor.process_audio("a.wav")
asyncio.run(main())
```

### `process_document()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Routes a document to the PDF/OCR pipeline. | `path` (str) | `Doc` |
```python
import narrative_ai as nai
import asyncio
async def main():
    doc = await nai.input_processor.process_document("d.pdf")
asyncio.run(main())
```

### `process_image()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Routes an image to OCR processing. | `path` (str) | `Doc` |
```python
import narrative_ai as nai
import asyncio
async def main():
    doc = await nai.input_processor.process_image("i.jpg")
asyncio.run(main())
```

### `process_url()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Fetches and processes content from a URL. | `url` (str) | `Doc` |
```python
import narrative_ai as nai
import asyncio
async def main():
    doc = await nai.input_processor.process_url("https://...")
asyncio.run(main())
```

### `InputClient`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Creates a stateful Input Processing client. | `user_id` (str) | `InputClient` |
```python
import narrative_ai as nai
client = nai.input_processor.InputClient()
```
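
The routers above tie the engines together into the voice-diary loop the framework is named for: transcribe an entry, index it, then later recall it and answer aloud. A hedged end-to-end sketch (entry ids and the prompt wording are illustrative; it assumes providers and API keys are already configured):

```python
import asyncio

async def add_entry(audio_path: str, entry_id: str) -> None:
    import narrative_ai as nai  # imported here so the sketch stays self-contained

    doc = await nai.input_processor.process_audio(audio_path)  # STT under the hood
    await nai.rag.remember(doc, entry_id)                      # index for recall()

async def ask_diary(question: str) -> str:
    import narrative_ai as nai

    ctx = await nai.rag.recall(question)                       # past entries
    res = await nai.llm.generate(f"{ctx.formatted_text}\n\n{question}")
    return await nai.tts.synthesize(res.text)                  # path to audio reply
```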

---

## 🤖 Voice Mode (`nai.voice_mode`)

### `start_agent()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Runs the LiveKit conversational agent loop. | `None` | `None` |
```python
import narrative_ai as nai
nai.voice_mode.start_agent()
```

### `set_livekit_config()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Sets LiveKit connection credentials. | `url` (str), `api_key` (str), `api_secret` (str) | `None` |
```python
import narrative_ai as nai
nai.voice_mode.set_livekit_config(url="...", api_key="...", api_secret="...")
```

### `set_agent_name()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Sets the agent's display name. | `name` (str) | `None` |
```python
import narrative_ai as nai
nai.voice_mode.set_agent_name("Jarvis")
```

### `VoiceClient`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Creates a stateful Voice Mode client. | `user_id` (str) | `VoiceClient` |
```python
import narrative_ai as nai
client = nai.voice_mode.VoiceClient()
```

---

## 🔍 Web Intelligence (`nai.web_intel`)

### `search()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Performs a real-time web search. | `query` (str) | `WebResult` |
```python
import narrative_ai as nai
import asyncio
async def main():
    nai.web_intel.set_api_key("key")
    res = await nai.web_intel.search("Current events")
asyncio.run(main())
```

### `research()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Generates a deep research report on a topic. | `topic` (str) | `str` (Markdown) |
```python
import narrative_ai as nai
import asyncio
async def main():
    report = await nai.web_intel.research("Global warming")
    print(report)
asyncio.run(main())
```
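
Since `research()` returns a Markdown string, a report can be written straight to disk. A small sketch (the output filename is a placeholder):

```python
import asyncio

async def save_report(topic: str, out_path: str) -> None:
    import narrative_ai as nai  # imported here so the sketch stays self-contained

    report = await nai.web_intel.research(topic)  # Markdown string
    with open(out_path, "w", encoding="utf-8") as f:
        f.write(report)
```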

### `set_api_key()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Sets the search provider API key. | `api_key` (str) | `None` |
```python
import narrative_ai as nai
nai.web_intel.set_api_key("key")
```

### `get_engine()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Retrieves the Web Intelligence engine instance. | `None` | `WebIntelEngine` |
```python
import narrative_ai as nai
engine = nai.web_intel.get_engine()
```

### `WebIntelClient`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Creates a stateful Web Intel client. | `user_id` (str) | `WebIntelClient` |
```python
import narrative_ai as nai
client = nai.web_intel.WebIntelClient()
```

---

## 🎨 VLM Engine (`nai.vlm`)

### `analyze_image()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Performs visual reasoning on an image. | `image` (Any), `prompt` (str) | `VLMResponse` |
```python
import narrative_ai as nai
import asyncio
async def main():
    nai.vlm.set_api_key("key")
    res = await nai.vlm.analyze_image("i.jpg", "Describe this.")
asyncio.run(main())
```

### `chat_with_image()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Converses with the AI about an image. | `image` (Any), `history` (List) | `VLMResponse` |
```python
import narrative_ai as nai
import asyncio
async def main():
    res = await nai.vlm.chat_with_image("i.jpg", history=[])
asyncio.run(main())
```

### `set_api_key()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Sets the Vision API key. | `api_key` (str) | `None` |
```python
import narrative_ai as nai
nai.vlm.set_api_key("key")
```

### `get_processor()`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Retrieves the VLM processing instance. | `None` | `VLMProcessor` |
```python
import narrative_ai as nai
p = nai.vlm.get_processor()
```

### `VLMClient`
| Description | Inputs | Outputs |
| :--- | :--- | :--- |
| Creates a stateful VLM client. | `user_id` (str) | `VLMClient` |
```python
import narrative_ai as nai
client = nai.vlm.VLMClient()
```

---

## License
MIT License.
