Metadata-Version: 2.4
Name: narrative-ai-framework
Version: 0.1.7
Summary: AI-powered voice diary framework: STT, TTS, LLM, RAG, and voice-agent engines
Author: Narrative AI Team
License: MIT
Keywords: ai,voice,diary,stt,tts,llm,rag,arabic
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: PyYAML>=6.0.1
Requires-Dist: python-dotenv>=1.0.0
Requires-Dist: pydantic>=2.5.0
Requires-Dist: aiohttp>=3.9.0
Requires-Dist: aiofiles>=23.2.1
Requires-Dist: requests>=2.28.0
Requires-Dist: shortuuid>=1.0.11
Requires-Dist: pyngrok>=7.0.0
Requires-Dist: nest-asyncio>=1.5.8
Requires-Dist: sympy>=1.12
Provides-Extra: stt
Requires-Dist: soundfile>=0.12.1; extra == "stt"
Requires-Dist: scipy>=1.11.0; extra == "stt"
Requires-Dist: webrtcvad>=2.0.10; extra == "stt"
Requires-Dist: numpy>=1.24.0; extra == "stt"
Requires-Dist: elevenlabs>=0.2.0; extra == "stt"
Requires-Dist: yt-dlp>=2023.11.0; extra == "stt"
Requires-Dist: pydub>=0.25.1; extra == "stt"
Requires-Dist: transformers>=4.36.0; extra == "stt"
Requires-Dist: accelerate>=0.25.0; extra == "stt"
Requires-Dist: torch>=2.1.0; extra == "stt"
Requires-Dist: ctranslate2>=4.0.0; extra == "stt"
Requires-Dist: faster-whisper>=1.0.0; extra == "stt"
Provides-Extra: tts
Requires-Dist: aiohttp>=3.9.0; extra == "tts"
Requires-Dist: numpy>=1.24.0; extra == "tts"
Provides-Extra: ocr
Requires-Dist: opencv-python>=4.8.0; extra == "ocr"
Requires-Dist: scikit-image>=0.21.0; extra == "ocr"
Requires-Dist: pdf2image>=1.16.3; extra == "ocr"
Requires-Dist: python-docx>=1.1.0; extra == "ocr"
Requires-Dist: einops>=0.6.1; extra == "ocr"
Requires-Dist: torch>=2.0.1; extra == "ocr"
Requires-Dist: torchvision>=0.15.2; extra == "ocr"
Requires-Dist: transformers>=4.45.0; extra == "ocr"
Requires-Dist: accelerate>=0.26.0; extra == "ocr"
Requires-Dist: qwen-vl-utils>=0.0.4; extra == "ocr"
Requires-Dist: timm>=0.9.2; extra == "ocr"
Requires-Dist: basicsr>=1.4.2; extra == "ocr"
Requires-Dist: realesrgan>=0.3.0; extra == "ocr"
Provides-Extra: llm
Requires-Dist: google-generativeai>=0.3.0; extra == "llm"
Requires-Dist: google-genai>=0.3.0; extra == "llm"
Requires-Dist: openai>=1.3.0; extra == "llm"
Requires-Dist: anthropic>=0.18.0; extra == "llm"
Requires-Dist: tiktoken>=0.5.0; extra == "llm"
Provides-Extra: voice
Requires-Dist: livekit>=0.11.0; extra == "voice"
Requires-Dist: livekit-api>=0.4.0; extra == "voice"
Requires-Dist: livekit-agents>=0.7.0; extra == "voice"
Requires-Dist: livekit-plugins-silero>=0.6.0; extra == "voice"
Requires-Dist: livekit-plugins-elevenlabs>=1.3.0; extra == "voice"
Requires-Dist: livekit-plugins-turn-detector>=1.3.0; extra == "voice"
Requires-Dist: livekit-plugins-noise-cancellation>=0.2.0; extra == "voice"
Requires-Dist: sounddevice>=0.5.0; extra == "voice"
Provides-Extra: db
Requires-Dist: SQLAlchemy>=2.0.0; extra == "db"
Requires-Dist: asyncpg>=0.29.0; extra == "db"
Requires-Dist: psycopg2-binary>=2.9.0; extra == "db"
Requires-Dist: alembic>=1.13.0; extra == "db"
Requires-Dist: redis>=5.0.0; extra == "db"
Provides-Extra: security
Requires-Dist: redis>=5.0.0; extra == "security"
Requires-Dist: SQLAlchemy>=2.0.0; extra == "security"
Requires-Dist: cryptography>=41.0.0; extra == "security"
Requires-Dist: PyJWT>=2.8.0; extra == "security"
Requires-Dist: bcrypt>=4.0.0; extra == "security"
Provides-Extra: api
Requires-Dist: fastapi>=0.109.0; extra == "api"
Requires-Dist: uvicorn[standard]>=0.27.0; extra == "api"
Requires-Dist: python-multipart>=0.0.6; extra == "api"
Requires-Dist: email-validator>=2.1.0; extra == "api"
Provides-Extra: rag
Requires-Dist: sentence-transformers>=2.2.2; extra == "rag"
Requires-Dist: FlagEmbedding>=1.3.5; extra == "rag"
Requires-Dist: pillow>=10.0.0; extra == "rag"
Requires-Dist: psutil>=5.9.0; extra == "rag"
Requires-Dist: unstructured[all-docs]>=0.10.0; extra == "rag"
Requires-Dist: python-magic>=0.4.27; extra == "rag"
Requires-Dist: pytesseract>=0.3.10; extra == "rag"
Requires-Dist: pgvector>=0.2.5; extra == "rag"
Requires-Dist: qdrant-client>=1.7.0; extra == "rag"
Provides-Extra: web
Requires-Dist: ddgs>=9.0.0; extra == "web"
Provides-Extra: vlm
Requires-Dist: pillow>=10.0.0; extra == "vlm"
Requires-Dist: numpy>=1.24.0; extra == "vlm"
Requires-Dist: ollama>=0.1.0; extra == "vlm"
Provides-Extra: all
Requires-Dist: soundfile>=0.12.1; extra == "all"
Requires-Dist: scipy>=1.11.0; extra == "all"
Requires-Dist: webrtcvad>=2.0.10; extra == "all"
Requires-Dist: numpy>=1.24.0; extra == "all"
Requires-Dist: elevenlabs>=0.2.0; extra == "all"
Requires-Dist: yt-dlp>=2023.11.0; extra == "all"
Requires-Dist: pydub>=0.25.1; extra == "all"
Requires-Dist: transformers>=4.36.0; extra == "all"
Requires-Dist: accelerate>=0.25.0; extra == "all"
Requires-Dist: torch>=2.1.0; extra == "all"
Requires-Dist: ctranslate2>=4.0.0; extra == "all"
Requires-Dist: faster-whisper>=1.0.0; extra == "all"
Requires-Dist: aiohttp>=3.9.0; extra == "all"
Requires-Dist: google-generativeai>=0.3.0; extra == "all"
Requires-Dist: google-genai>=0.3.0; extra == "all"
Requires-Dist: openai>=1.3.0; extra == "all"
Requires-Dist: anthropic>=0.18.0; extra == "all"
Requires-Dist: tiktoken>=0.5.0; extra == "all"
Requires-Dist: livekit>=0.11.0; extra == "all"
Requires-Dist: livekit-api>=0.4.0; extra == "all"
Requires-Dist: livekit-agents>=0.7.0; extra == "all"
Requires-Dist: livekit-plugins-silero>=0.6.0; extra == "all"
Requires-Dist: livekit-plugins-elevenlabs>=1.3.0; extra == "all"
Requires-Dist: livekit-plugins-turn-detector>=1.3.0; extra == "all"
Requires-Dist: livekit-plugins-noise-cancellation>=0.2.0; extra == "all"
Requires-Dist: sounddevice>=0.5.0; extra == "all"
Requires-Dist: SQLAlchemy>=2.0.0; extra == "all"
Requires-Dist: asyncpg>=0.29.0; extra == "all"
Requires-Dist: psycopg2-binary>=2.9.0; extra == "all"
Requires-Dist: alembic>=1.13.0; extra == "all"
Requires-Dist: redis>=5.0.0; extra == "all"
Requires-Dist: cryptography>=41.0.0; extra == "all"
Requires-Dist: PyJWT>=2.8.0; extra == "all"
Requires-Dist: bcrypt>=4.0.0; extra == "all"
Requires-Dist: fastapi>=0.109.0; extra == "all"
Requires-Dist: uvicorn[standard]>=0.27.0; extra == "all"
Requires-Dist: python-multipart>=0.0.6; extra == "all"
Requires-Dist: email-validator>=2.1.0; extra == "all"
Requires-Dist: ddgs>=9.0.0; extra == "all"
Requires-Dist: opencv-python>=4.8.0; extra == "all"
Requires-Dist: scikit-image>=0.21.0; extra == "all"
Requires-Dist: pdf2image>=1.16.3; extra == "all"
Requires-Dist: python-docx>=1.1.0; extra == "all"
Requires-Dist: einops>=0.6.1; extra == "all"
Requires-Dist: qwen-vl-utils>=0.0.4; extra == "all"
Requires-Dist: pgvector>=0.2.5; extra == "all"
Requires-Dist: qdrant-client>=1.7.0; extra == "all"
Requires-Dist: ollama>=0.1.0; extra == "all"
Provides-Extra: dev
Requires-Dist: pytest>=7.4.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.23.0; extra == "dev"
Requires-Dist: pytest-cov>=4.1.0; extra == "dev"
Requires-Dist: httpx>=0.25.0; extra == "dev"
Requires-Dist: mypy>=1.7.0; extra == "dev"
Requires-Dist: ruff>=0.1.0; extra == "dev"
Dynamic: license-file

# Narrative AI SDK
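
AI-powered voice diary framework bundling STT, TTS, LLM, RAG, OCR, VLM, and voice-agent engines. Optional engines ship as extras (named in the package metadata above); a minimal install sketch, assuming the distribution is published on PyPI under the `Name` field shown above:

```shell
# Core package only
pip install narrative-ai-framework

# With selected engines (extras defined in the metadata: stt, tts, llm, rag, ocr, voice, db, security, api, web, vlm)
pip install "narrative-ai-framework[llm,stt,tts]"

# Everything
pip install "narrative-ai-framework[all]"
```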

---

## 🔑 LLM Engine (`nai.llm`)

### `generate()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Standard text generation | `prompt`, `model`, `max_tokens` | `LLMResponse` |
```python
import narrative_ai as nai
import asyncio
async def main():
    nai.llm.set_api_key("key", provider="openai")
    res = await nai.llm.generate("Hi")
    print(res.text)
asyncio.run(main())
```

### `generate_stream()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Streaming responses | `prompt`, `model` | `AsyncIterator` |
```python
import narrative_ai as nai
import asyncio
async def main():
    nai.llm.set_api_key("key", provider="openai")
    async for chunk in nai.llm.generate_stream("Hi"):
        print(chunk)
asyncio.run(main())
```

### `set_api_key()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Set API key | `api_key`, `provider` | `None` |
```python
import narrative_ai as nai
nai.llm.set_api_key("key", "openai")
```

### `set_llm_provider()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Change provider | `provider` | `None` |
```python
import narrative_ai as nai
nai.llm.set_llm_provider("anthropic")
```

### `set_service_url()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Set base URL | `url` | `None` |
```python
import narrative_ai as nai
nai.llm.set_service_url("https://api.openai.com/v1")
```

---

## 🎙️ STT Engine (`nai.stt`)

### `transcribe()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Audio to text | `audio_path`, `language` | `STTResult` |
```python
import narrative_ai as nai
import asyncio
async def main():
    nai.stt.set_api_key("key", "elevenlabs")
    res = await nai.stt.transcribe("a.mp3")
    print(res.text)
asyncio.run(main())
```

### `stream_transcribe()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Real-time transcription | `audio_stream` | `AsyncIterator` |
```python
import narrative_ai as nai
import asyncio
async def main():
    nai.stt.set_api_key("key", "elevenlabs")
    # `stream` is a placeholder: an async iterator yielding audio chunks
    async for t in nai.stt.stream_transcribe(stream):
        print(t.text)
asyncio.run(main())
```

### `set_api_key()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Set STT key | `key`, `provider` | `None` |
```python
import narrative_ai as nai
nai.stt.set_api_key("key", "elevenlabs")
```

---

## 🔊 TTS Engine (`nai.tts`)

### `synthesize()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Text to audio | `text`, `voice` | `str` (Path) |
```python
import narrative_ai as nai
import asyncio
async def main():
    nai.tts.set_api_key("key", "openai")
    path = await nai.tts.synthesize("Hi")
    print(path)
asyncio.run(main())
```

### `stream_synthesize()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Stream audio bytes | `text`, `voice` | `AsyncIterator` |
```python
import narrative_ai as nai
import asyncio
async def main():
    nai.tts.set_api_key("key", "openai")
    async for chunk in nai.tts.stream_synthesize("Hi"):
        print(len(chunk))
asyncio.run(main())
```

---

## 📚 RAG Engine (`nai.rag`)

### `remember()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Index document | `doc`, `doc_id` | `bool` |
```python
import narrative_ai as nai
import asyncio
async def main():
    doc = await nai.input_processor.process("f.pdf")
    await nai.rag.remember(doc, "id_1")
asyncio.run(main())
```

### `recall()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Retrieve relevant context | `query`, `top_k` | `RichContext` |
```python
import narrative_ai as nai
import asyncio
async def main():
    res = await nai.rag.recall("query")
    print(res.formatted_text)
asyncio.run(main())
```

### `forget()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Delete doc | `doc_id` | `bool` |
```python
import narrative_ai as nai
import asyncio
async def main():
    await nai.rag.forget("id_1")
asyncio.run(main())
```

### `clear_memory()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Wipe vector store | `None` | `bool` |
```python
import narrative_ai as nai
import asyncio
async def main():
    await nai.rag.clear_memory()
asyncio.run(main())
```

---

## 👁️ OCR Engine (`nai.ocr`)

### `process_image()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Image to text | `image_path` | `OCRResult` |
```python
import narrative_ai as nai
import asyncio
async def main():
    res = await nai.ocr.process_image("i.jpg")
    print(res.text)
asyncio.run(main())
```

### `process_pdf()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| PDF to text | `pdf_path` | `OCRResult` |
```python
import narrative_ai as nai
import asyncio
async def main():
    res = await nai.ocr.process_pdf("d.pdf")
    print(res.text)
asyncio.run(main())
```

---

## 🛠️ Input Processor (`nai.input_processor`)

### `process()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Auto-detect source type and process | `source` | `StructuredDocument` |
```python
import narrative_ai as nai
import asyncio
async def main():
    doc = await nai.input_processor.process("data.zip")
    print(doc.text)
asyncio.run(main())
```

### `process_batch()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Batch process | `list` | `List[Doc]` |
```python
import narrative_ai as nai
import asyncio
async def main():
    docs = await nai.input_processor.process_batch(["f1", "f2"])
asyncio.run(main())
```

### `process_audio()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Process audio explicitly | `path` | `Doc` |
```python
import narrative_ai as nai
import asyncio
async def main():
    doc = await nai.input_processor.process_audio("a.wav")
asyncio.run(main())
```

### `process_document()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Process PDF/DOCX explicitly | `path` | `Doc` |
```python
import narrative_ai as nai
import asyncio
async def main():
    doc = await nai.input_processor.process_document("d.pdf")
asyncio.run(main())
```

---

## 🤖 Voice Mode (`nai.voice_mode`)

### `start_agent()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Run agent loop | `None` | `None` |
```python
import narrative_ai as nai
nai.voice_mode.start_agent()
```

### `set_livekit_config()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Set connection | `url`, `key`, `secret` | `None` |
```python
import narrative_ai as nai
nai.voice_mode.set_livekit_config("url", "key", "secret")
```

### `set_agent_name()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Set agent name | `name` | `None` |
```python
import narrative_ai as nai
nai.voice_mode.set_agent_name("Assistant")
```

---

## 🔍 Web Intelligence (`nai.web_intel`)

### `search()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Real-time search | `query` | `WebResult` |
```python
import narrative_ai as nai
import asyncio
async def main():
    res = await nai.web_intel.search("AI")
asyncio.run(main())
```

### `research()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| In-depth research report | `topic` | `str` |
```python
import narrative_ai as nai
import asyncio
async def main():
    report = await nai.web_intel.research("Topic")
asyncio.run(main())
```

---

## 🎨 VLM Engine (`nai.vlm`)

### `analyze_image()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Image reasoning | `image`, `prompt` | `VLMResponse` |
```python
import narrative_ai as nai
import asyncio
async def main():
    res = await nai.vlm.analyze_image("i.jpg", "What is this?")
asyncio.run(main())
```

### `chat_with_image()`
| Description | Inputs | Returns |
| :--- | :--- | :--- |
| Multi-turn chat | `image`, `history` | `VLMResponse` |
```python
import narrative_ai as nai
import asyncio
async def main():
    res = await nai.vlm.chat_with_image("i.jpg", history=[])
asyncio.run(main())
```

---

## License
MIT License.
