Metadata-Version: 2.4
Name: anyrobo
Version: 0.2.1
Summary: A local-first voice AI assistant framework. Create your own JARVIS.
Project-URL: Homepage, https://github.com/vietanhdev/anyrobo
Project-URL: Repository, https://github.com/vietanhdev/anyrobo
Project-URL: Documentation, https://nrl.ai
Project-URL: Issues, https://github.com/vietanhdev/anyrobo/issues
Author-email: Viet-Anh Nguyen <vietanh.dev@gmail.com>
License-Expression: MIT
License-File: LICENSE
Keywords: ai,assistant,llm,local,offline,stt,tts,voice
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Multimedia :: Sound/Audio :: Speech
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.8
Requires-Dist: click>=8.0
Requires-Dist: pyyaml>=6.0
Provides-Extra: all
Requires-Dist: elevenlabs>=0.2.0; extra == 'all'
Requires-Dist: ollama>=0.1.0; extra == 'all'
Requires-Dist: openai-whisper>=20230314; extra == 'all'
Requires-Dist: openai>=1.0; extra == 'all'
Requires-Dist: pyttsx3>=2.90; extra == 'all'
Requires-Dist: vosk>=0.3.45; extra == 'all'
Provides-Extra: dev
Requires-Dist: pytest-cov>=4.0; extra == 'dev'
Requires-Dist: pytest>=7.0; extra == 'dev'
Provides-Extra: elevenlabs
Requires-Dist: elevenlabs>=0.2.0; extra == 'elevenlabs'
Provides-Extra: ollama
Requires-Dist: ollama>=0.1.0; extra == 'ollama'
Provides-Extra: openai
Requires-Dist: openai>=1.0; extra == 'openai'
Provides-Extra: pyttsx3
Requires-Dist: pyttsx3>=2.90; extra == 'pyttsx3'
Provides-Extra: vosk
Requires-Dist: vosk>=0.3.45; extra == 'vosk'
Provides-Extra: whisper
Requires-Dist: openai-whisper>=20230314; extra == 'whisper'
Description-Content-Type: text/markdown

<h1 align="center">anyrobo</h1>
<p align="center"><em>Build your own JARVIS — local-first voice AI</em></p>

![PyPI](https://img.shields.io/pypi/v/anyrobo)
![Python](https://img.shields.io/pypi/pyversions/anyrobo)
![License](https://img.shields.io/pypi/l/anyrobo)

**A local-first voice AI assistant framework.** Create your own JARVIS with pluggable TTS, STT, and LLM backends. Works completely offline with local models.

Built by [Viet-Anh Nguyen](https://github.com/vietanhdev) at [NRL.ai](https://www.nrl.ai).

## Installation

```bash
pip install anyrobo
```

With all optional backends (quote the extras so shells like zsh don't expand the brackets):

```bash
pip install "anyrobo[all]"
```

Each backend is also available as its own extra (`ollama`, `openai`, `whisper`, `vosk`, `pyttsx3`, `elevenlabs`), so you can install only what you need, e.g. `pip install "anyrobo[ollama,whisper]"`.

## Quick Start

```python
import anyrobo

# Create an assistant with a personality
assistant = anyrobo.Assistant(personality="jarvis")

# Add custom tools
def get_weather(location: str) -> str:
    """Get the current weather for a location."""
    return f"Sunny in {location}"

assistant.add_tool(get_weather, description="Get weather for a location")

# Chat with the assistant
response = assistant.chat("What's the weather in Paris?")
print(response)

# Stream responses
for chunk in assistant.chat("Tell me a story", stream=True):
    print(chunk, end="")
```

## Personalities

```python
jarvis = anyrobo.Personality.builtin("jarvis")   # Helpful butler
glados = anyrobo.Personality.builtin("glados")   # Sarcastic AI
default = anyrobo.Personality.builtin("assistant")  # Neutral assistant
```

## Memory

```python
memory = anyrobo.ConversationMemory(max_messages=100, persistence_path="memory.json")
memory.add("user", "Hello")
memory.add("assistant", "Hi there!")
results = memory.search("hello")
```

## Events

```python
@assistant.on("tool_call")
def log_tool(name, args):
    print(f"Calling {name} with {args}")
```

## Configuration

Create a YAML config file:

```yaml
personality: jarvis
llm:
  backend: ollama
  model: llama3
stt:
  backend: whisper
  model_size: base
tts:
  backend: pyttsx3
```

```python
config = anyrobo.AssistantConfig.from_yaml("config.yaml")
assistant = anyrobo.Assistant(config=config)
```

## CLI

```bash
anyrobo chat          # Interactive chat
anyrobo listen        # Voice input mode
anyrobo config show   # Show current config
```

## Backends

### LLM
- **Ollama** (default) - Local LLM inference
- **OpenAI** - OpenAI API

### Speech-to-Text
- **Whisper** (default) - OpenAI Whisper local model
- **Vosk** - Offline STT

### Text-to-Speech
- **pyttsx3** (default) - Offline TTS
- **ElevenLabs** - Cloud TTS API
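Backends are selected through the same config schema shown in the Configuration section. As a sketch, a setup that swaps the default LLM and TTS for the cloud backends might look like the following (the `backend` keys come from the schema above; the model name is illustrative, and cloud backends will additionally need their API keys configured):

```yaml
personality: assistant
llm:
  backend: openai
  model: gpt-4o-mini   # illustrative model name
stt:
  backend: vosk
tts:
  backend: elevenlabs
```

Load it the same way as any other config, via `anyrobo.AssistantConfig.from_yaml(...)`.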

## License

MIT License - see [LICENSE](LICENSE) for details.

## Author

[Viet-Anh Nguyen](https://nrl.ai) ([@vietanhdev](https://github.com/vietanhdev))
