Metadata-Version: 2.4
Name: xorfice
Version: 0.1.43
Summary: SOTA Omni-Modal Personal AI Orchestrator & Engine
Author-email: Backup-bdg <contact@xoron.dev>
Project-URL: Homepage, https://huggingface.co/Backup-bdg/Xoron-Dev-MultiMoe
Project-URL: Documentation, http://localhost:8000
Classifier: Programming Language :: Python :: 3
Classifier: License :: OSI Approved :: MIT License
Classifier: Operating System :: OS Independent
Requires-Python: >=3.9
Description-Content-Type: text/markdown
Requires-Dist: torch>=2.0.0
Requires-Dist: triton
Requires-Dist: transformers
Requires-Dist: fastapi
Requires-Dist: uvicorn
Requires-Dist: pydantic
Requires-Dist: safetensors
Requires-Dist: hf_transfer
Requires-Dist: huggingface_hub
Requires-Dist: rich
Requires-Dist: readchar

# 📦 Xorfice: The SOTA Omni-Modal Intelligence Engine

[![PyPI version](https://img.shields.io/pypi/v/xorfice.svg)](https://pypi.org/project/xorfice/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

**Xorfice** is the official, high-performance orchestration engine for **Xoron-Dev**. It provides a secure, production-grade interface to the **Xoron-Dev 5B-parameter Sparse Mixture of Experts (MoE)** model, enabling seamless reasoning across Text, Vision, Video, and Audio.

---

## 🚀 Why Xorfice?

Xorfice is designed for developers who demand state-of-the-art multimodal performance without the complexity of manual model management.

### 🛡️ Enterprise-Grade Trust
- **Official Interface:** Developed and maintained by the Backup-bdg team.
- **Privacy First:** All multimodal processing (Vision, Audio, Video) occurs locally on your hardware.
- **Validated Weights:** Automatic checksum verification for all model weights downloaded from HuggingFace.
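
Checksum verification amounts to comparing a SHA-256 digest of each downloaded shard against a published reference digest. A minimal sketch of that check using only the standard library (the `verify_checksum` helper is illustrative, not the actual xorfice API):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so multi-GB shards never load into RAM."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_checksum(path: Path, expected_hex: str) -> bool:
    """Return True when the on-disk shard matches its published digest."""
    return sha256_of(path) == expected_hex
```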

### ✨ SOTA Features
- **🧠 Sparse MoE Orchestration:** Native support for 8-expert routing with **Deep Expert** invocation (depths up to 5) for tokens that demand complex reasoning.
- **⚡ Fast Ponder Latents:** An attention-free, depth-3 latent reasoning block that accelerates internal reasoning steps by 10-20x.
- **👁️ SigLIP-2 & TiTok Vision:** Built on SigLIP-2 for superior zero-shot alignment and 2.2x token compression.
- **🎬 VidTok Video Logic:** 3D volumetric compression for understanding motion and causality.
- **🎙️ Raw PCM Audio:** Direct Conformer-based ingestion of 16kHz audio for sub-200ms Speech-to-Speech latency.
- **🎨 Creative Power:** Integrated pipelines for **Text-to-Video (T2V)** and **Image-to-Video (I2V)**.
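
At its core, sparse MoE routing is simple: a router scores all 8 experts for each token, and only the top-k highest-scoring experts actually run. The sketch below shows the routing math in plain Python; the logit values and top-k of 2 are placeholder assumptions for illustration, not Xorfice internals:

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of router logits."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def route_token(router_logits, k=2):
    """Select the top-k experts and renormalize their gate weights to sum to 1.

    Returns a list of (expert_index, gate_weight) pairs; only these
    experts' feed-forward blocks execute for this token.
    """
    probs = softmax(router_logits)
    top = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)[:k]
    norm = sum(probs[i] for i in top)
    return [(i, probs[i] / norm) for i in top]
```

Because only k of the 8 expert feed-forward blocks run per token, compute cost scales with k rather than with total parameter count.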

---

## 🛠️ Installation

Simply install the latest stable version from PyPI:

```bash
pip install xorfice
```

---

## 💡 Quick Start

Get up and running with the `XoronEngine` in seconds.

```python
from xorfice import XoronEngine

# The engine automatically handles hardware optimization (CUDA/VRAM)
# Weights are verified and cached from Backup-bdg/Xoron-Dev-MultiMoe
engine = XoronEngine(model_path="Backup-bdg/Xoron-Dev-MultiMoe")

# Multimodal Reasoning
response = engine.generate(
    prompt="Analyze the speaker's emotions in this video.",
    videos="https://example.com/interview.mp4",
    audios="https://example.com/interview_audio.wav"
)

print(f"Xoron: {response['text']}")
```

---

## ⚙️ Performance & Optimization

Xorfice includes industry-leading optimization techniques:
- **Expert Offloading:** Run 5B+ parameter models on 8GB VRAM consumer GPUs.
- **Paged KV Cache:** Massive throughput for long-context (128K) reasoning.
- **Adaptive Precision:** Automatic switching between FP16 and BF16 based on hardware capability.
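
The idea behind a paged KV cache is to allocate attention key/value storage in fixed-size pages, like virtual memory, so a 128K-token sequence never needs one huge contiguous buffer. A toy allocator sketching the bookkeeping (page size, class, and method names are illustrative assumptions, not the Xorfice implementation):

```python
class PagedKVCache:
    """Toy page allocator for KV storage: sequences own lists of pages,
    and freed pages return to a shared pool for reuse."""

    def __init__(self, num_pages: int, page_size: int = 16):
        self.page_size = page_size
        self.free_pages = list(range(num_pages))
        self.page_table = {}  # seq_id -> list of page ids owned by the sequence
        self.lengths = {}     # seq_id -> number of tokens stored so far

    def append_token(self, seq_id: int):
        """Return (page_id, slot) where the new token's K/V should be written."""
        n = self.lengths.get(seq_id, 0)
        pages = self.page_table.setdefault(seq_id, [])
        if n % self.page_size == 0:  # all owned pages are full; grab a new one
            pages.append(self.free_pages.pop())
        self.lengths[seq_id] = n + 1
        return pages[-1], n % self.page_size

    def free(self, seq_id: int):
        """Release a finished sequence's pages back to the pool."""
        self.free_pages.extend(self.page_table.pop(seq_id, []))
        self.lengths.pop(seq_id, None)
```

Because pages are recycled the moment a sequence finishes, memory fragmentation stays low and many long-context requests can share one fixed pool, which is where the throughput gain comes from.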

---

## 🤝 Community & Support
- **Model Hub:** [Backup-bdg on HuggingFace](https://huggingface.co/Backup-bdg/Xoron-Dev-MultiMoe)
- **Documentation:** Accessible locally at [http://localhost:8000](http://localhost:8000) (Documentation Tab) while the engine is running.

*Xorfice: Powering the next generation of omni-modal agents.*
