Metadata-Version: 2.4
Name: phoenix-os
Version: 0.5.4
Summary: An AI that feels like it knows you -- cognitive architecture with memory as the nervous system.
Author-email: harshalmore31 <harshalmore2468@gmail.com>
License: Apache-2.0
Project-URL: Homepage, https://github.com/harshalmore31/phoenix-os
Project-URL: Repository, https://github.com/harshalmore31/phoenix-os
Project-URL: Issues, https://github.com/harshalmore31/phoenix-os/issues
Project-URL: Changelog, https://github.com/harshalmore31/phoenix-os/blob/main/CHANGELOG.md
Keywords: ai,agent,llm,memory,cognitive-architecture,personal-assistant
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.11
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: pydantic-ai>=0.0.20
Requires-Dist: pydantic>=2.0
Requires-Dist: pyyaml>=6.0
Requires-Dist: numpy>=1.24
Requires-Dist: keyring>=24.0
Requires-Dist: rich>=13.0
Requires-Dist: textual>=8.1.1
Requires-Dist: textual-speedups>=0.2.1
Requires-Dist: pyperclip>=1.9
Requires-Dist: httpx>=0.27
Requires-Dist: python-dotenv>=1.0
Requires-Dist: aiosqlite>=0.19
Requires-Dist: sentence-transformers>=5.0
Requires-Dist: transformers>=4.56
Requires-Dist: huggingface_hub>=0.24
Requires-Dist: fastembed>=0.3
Provides-Extra: voice
Requires-Dist: pvporcupine>=3.0; extra == "voice"
Requires-Dist: sounddevice>=0.4; extra == "voice"
Requires-Dist: webrtcvad>=2.0; extra == "voice"
Requires-Dist: noisereduce>=3.0; extra == "voice"
Requires-Dist: faster-whisper>=1.0; extra == "voice"
Provides-Extra: browser
Requires-Dist: browser-use>=0.1; extra == "browser"
Requires-Dist: playwright>=1.40; extra == "browser"
Requires-Dist: playwright-stealth>=1.0; extra == "browser"
Requires-Dist: langchain-openai>=0.1; extra == "browser"
Provides-Extra: pdf
Requires-Dist: pymupdf>=1.24; extra == "pdf"
Provides-Extra: learning
Requires-Dist: optuna>=4.0; extra == "learning"
Provides-Extra: dev
Requires-Dist: ruff>=0.15; extra == "dev"
Requires-Dist: pyright>=1.1.400; extra == "dev"
Requires-Dist: pre-commit>=4.0; extra == "dev"
Requires-Dist: pytest>=8.0; extra == "dev"
Requires-Dist: pytest-asyncio>=0.24; extra == "dev"
Provides-Extra: all
Requires-Dist: phoenix-os[browser,learning,pdf,voice]; extra == "all"
Dynamic: license-file

# Phoenix

[![CI](https://github.com/harshalmore31/phoenix-os/actions/workflows/ci.yml/badge.svg)](https://github.com/harshalmore31/phoenix-os/actions/workflows/ci.yml)
[![PyPI](https://img.shields.io/pypi/v/phoenix-os.svg)](https://pypi.org/project/phoenix-os/)
[![Python](https://img.shields.io/pypi/pyversions/phoenix-os.svg)](https://pypi.org/project/phoenix-os/)
[![License](https://img.shields.io/badge/license-Apache%202.0-blue.svg)](./LICENSE)

**An AI that feels like it knows you.**

Not a chatbot. Not a memory framework. Not a coding agent. Phoenix is a cognitive architecture where memory is the nervous system -- the AI that actually remembers who you are, notices when you contradict yourself, and picks up where you left off.

> Read [`principle.md`](./principle.md) and [`vision.md`](./vision.md) before using, contributing, or forking. They are the founding documents. They are not optional.

---

## What Phoenix Does That Other Agents Do Not

Five behaviors that no other AI agent ships reliably today:

- **Resumption.** Quit mid-task. Reopen days later. Phoenix picks up exactly where things were -- because it understood what was happening, not just recorded it.
- **Correction.** Phoenix notices when you contradict yourself and surfaces it respectfully, because the memory gate checks new facts against old ones before storing.
- **Ambience.** Phoenix speaks up at the right moment without being asked, within a token budget. Not spammy notifications -- genuine noticing.
- **Recognition.** Phoenix says "you mentioned X" (recognition) instead of "I found a relevant memory" (retrieval). Small phrasing distinction. Massive experiential difference.
- **Consolidation.** Over weeks, Phoenix notices patterns you did not -- that you ship more on Tuesdays, that you get stuck in the same refactor loop -- and brings them up when useful.

These are not features. They are behaviors that emerge from the architecture. No amount of wrapping OpenAI in a prompt will produce them.

---

## What Phoenix Is Not

- **Not a memory framework.** mem0, Letta, Zep, and LangChain memory are storage with retrieval. Phoenix is cognitive architecture.
- **Not a Swiss Army knife.** If you need 25+ messaging channels and 50+ LLM providers, use OpenClaw. Phoenix is a scalpel.
- **Not a Claude Code clone.** Claude Code is a coding agent. Phoenix is a personal agent that happens to be good at coding.
- **Not design-by-committee.** Contributions welcome. Generic contributions rejected.

---

## Install

```bash
# Recommended: install with uv (fast)
uv pip install phoenix-os

# Or with pip
pip install phoenix-os
```

From source:

```bash
git clone https://github.com/harshalmore31/phoenix-os.git
cd phoenix-os
uv pip install -e .   # or: pip install -e .
```

On first run, the setup wizard guides you through model selection, API key entry, and Hugging Face authentication for the embedding model. Keys are stored in your OS keychain (macOS Keychain / Windows Credential Locker / Linux Secret Service), not in plaintext.

```bash
phoenix              # first run triggers setup wizard
phoenix --setup      # re-run setup anytime
```

**Default embeddings: google/embeddinggemma-300m** (~600MB, multilingual). The setup wizard opens the license page and token page in your browser, then verifies the token. If you cannot complete HF auth, fall back to English-only MiniLM via `phoenix --embedder fastembed`.

Optional extras:

```bash
uv pip install "phoenix-os[voice]"       # Wake word + STT + TTS
uv pip install "phoenix-os[browser]"     # Web automation
uv pip install "phoenix-os[pdf]"         # PDF text extraction
uv pip install "phoenix-os[all]"         # Everything
```

---

## Quick Start

```bash
# Set your model and API key
export PHOENIX_MODEL=anthropic:claude-sonnet-4-20250514
export ANTHROPIC_API_KEY=sk-...

# Run
phoenix
```

---

## Supported Models

Phoenix works with any model provider supported by pydantic-ai:

| Provider | Example | Env Variable |
|----------|---------|-------------|
| Anthropic | `anthropic:claude-sonnet-4-20250514` | `ANTHROPIC_API_KEY` |
| OpenAI | `openai:gpt-4o` | `OPENAI_API_KEY` |
| Google Gemini | `google-gla:gemini-2.5-flash` | `GEMINI_API_KEY` |
| Groq | `groq:llama-3.3-70b-versatile` | `GROQ_API_KEY` |
| Mistral | `mistral:mistral-large-latest` | `MISTRAL_API_KEY` |
| DeepSeek | `deepseek:deepseek-chat` | `DEEPSEEK_API_KEY` |
| Ollama (local, free) | `ollama:llama3:8b` | None needed |
| OpenRouter | `openrouter:...` | `OPENROUTER_API_KEY` |

```bash
phoenix --model anthropic:claude-sonnet-4-20250514
phoenix --model ollama:llama3:8b         # local, free
phoenix --model groq:llama-3.3-70b-versatile   # fast inference
```

---

## Usage

```
phoenix                     # Start new session
phoenix --continue          # Resume last session
phoenix --resume SESSION_ID # Resume specific session
phoenix --sessions          # List saved sessions
phoenix --model openai:gpt-4o  # Override model
phoenix --auto              # Auto-approve file edits
phoenix --yolo              # Auto-approve everything
phoenix --voice             # Enable voice I/O
```

### Shell Commands

| Command | Description |
|---------|-------------|
| `/help` | Show all commands |
| `/tools` | List available abilities |
| `/agents` | List available agents |
| `/memory` | Show memory stats |
| `/model` | Show or switch model |
| `/mode` | Show or switch approval mode (safe/auto/yolo) |
| `/cost` | Show token usage |
| `/sessions` | List saved sessions |
| `/clear` | Clear conversation history |
| `/bye` | Exit |

---

## Architecture

Memory is the nervous system. Every other module exists to support it.

```
phoenix/
    core/           # Engine -- builds agents, routes delegation, discovers plugins
    memory/         # What Phoenix remembers -- cognitive pipeline (the spine)
    abilities/      # What Phoenix can do -- each file is one pluggable capability
    personality/    # Who Phoenix is -- YAML agent definitions + prompt generation
    ambient/        # What Phoenix notices -- background intelligence, token-budgeted
    voice/          # How Phoenix speaks/listens -- wake word, STT, TTS (optional)
    shell/          # How users interact -- CLI loop, commands, sessions
    hooks/          # How Phoenix extends -- lifecycle event hooks
    config/         # How Phoenix is configured -- env, models, paths
```

### Adding an Ability

Drop a Python file with the `@ability` decorator. No registration. No manifest. No plugin SDK:

```python
# abilities/weather.py
from phoenix.abilities import ability

@ability(name="weather", description="Get current weather")
async def weather(city: str) -> str:
    return f"Weather in {city}: sunny, 22C"
```

Add `weather` to any agent's YAML abilities list. Done. Phoenix auto-discovers it at startup.

**Three tiers:**

1. **Simple** -- function with `@ability`. No Phoenix knowledge needed.
2. **With approval** -- same function. Interceptor handles permissions externally.
3. **With context** -- add `ctx: PhoenixContext` for memory, config, logging.

```python
from phoenix.abilities import ability, PhoenixContext

@ability(name="smart_search", description="Search informed by memory")
async def smart_search(ctx: PhoenixContext, query: str) -> str:
    past = ctx.memory.pre_turn(query) if ctx.memory else ""
    return f"Results for: {query} (context: {past[:100]})"
```

### Adding an Agent

Drop a YAML file. No Python changes:

```yaml
# personality/agents/researcher.yaml
name: researcher
role: "Deep research and analysis"
tone:
  - thorough
  - analytical

rules:
  - "Always cite sources"
  - "Cross-reference multiple sources"

abilities:
  - read_file
  - grep
  - bash

meta:
  category: community
```

Phoenix picks it up at next startup.

---

## Memory: Cognition, Not Storage

Most "AI with memory" products are storage with retrieval: save fact, find fact on query match. That is retrieval-augmented generation with extra steps. Anyone can build it in a weekend.

Phoenix is different in kind, not degree:

1. **Extract** -- equation-based fact extraction from user input. Declarative signal, semantic scoring, intent detection. **Zero LLM calls.**
2. **Gate** -- three-stage novelty filtering:
   - Duplicate detection
   - Contradiction detection (flag, supersede, or merge)
   - Output-aware gating (refuses to store the AI's own words as user facts)
3. **Store** -- SQLite with graph edges connecting related memories
4. **Recall** -- semantic similarity + memory strength + recency + spreading activation through the graph
5. **Feedback** -- memories the AI actually uses get stronger. Unused memories decay. Your brain does this. mem0 does not.
6. **Consolidate** -- clusters similar memories, adjusts weights, tracks emotional/contextual dimensions
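The recall step (4) blends several signals into one score, and the feedback step (5) moves memory strength up or down. A minimal sketch of that shape -- the weights, half-life, and function names here are invented for illustration, not Phoenix's internals:

```python
import math

# Illustrative weights and decay -- Phoenix learns its real thresholds
# (see the learning/ module); these numbers are made up for the sketch.
W_SIM, W_STRENGTH, W_RECENCY = 0.6, 0.25, 0.15
HALF_LIFE_DAYS = 30.0

def recall_score(similarity: float, strength: float, age_days: float) -> float:
    """Step 4: blend semantic similarity, memory strength, and recency decay."""
    recency = math.exp(-math.log(2) * age_days / HALF_LIFE_DAYS)
    return W_SIM * similarity + W_STRENGTH * strength + W_RECENCY * recency

def reinforce(strength: float, used: bool, decay: float = 0.02) -> float:
    """Step 5: memories the AI uses get stronger; unused ones decay toward zero."""
    return min(1.0, strength + 0.1) if used else max(0.0, strength - decay)
```

Spreading activation would then boost memories that are graph-adjacent to the top hits; that traversal is omitted here.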

Task-typed embeddings via **EmbeddingGemma-300M** (768-dim, multilingual, 100+ languages): queries, documents, clusters, and symmetric comparisons each use their correct prompt prefix. FastEmbed + MiniLM remains as a smaller English-only fallback, selectable at `phoenix --setup`.
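"Task-typed" means each use prepends a task-specific prompt before encoding, so queries and documents land in compatible regions of the vector space. The prefixes below illustrate the pattern only -- check the EmbeddingGemma model card (or sentence-transformers' `prompt_name` support) for the exact strings:

```python
# Illustrative task prompts -- the authoritative strings live in the
# EmbeddingGemma model card, not here.
TASK_PROMPTS = {
    "query": "task: search result | query: ",
    "document": "title: none | text: ",
    "cluster": "task: clustering | query: ",
    "similarity": "task: sentence similarity | query: ",
}

def with_task(text: str, task: str = "query") -> str:
    """Prepend the task-specific prompt before handing text to the embedder."""
    return TASK_PROMPTS[task] + text
```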

**The recall thresholds are learnable.** The `learning/` module runs Bayesian optimization over your own memory corpus with an LLM-as-judge scoring retrieval quality per trial. You do not tune thresholds by hand -- your memory literally teaches the retrieval layer how to recall itself. See [`learning/README.md`](./learning/README.md).
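In spirit, the loop looks like the stand-in below. Optuna's TPE sampler does the real Bayesian search, and the judge is an LLM scoring actual retrievals; here a plain random search and a stub scoring curve are substituted so the shape is visible:

```python
import random

def judge(threshold: float) -> float:
    """Stub for the LLM-as-judge: score retrieval quality at a threshold.
    A made-up curve peaking near 0.55 stands in for real judgments."""
    return 1.0 - abs(threshold - 0.55) * 2

def tune(trials: int = 50, seed: int = 0) -> float:
    """Random-search stand-in for Bayesian optimization over a recall threshold."""
    rng = random.Random(seed)
    best_t, best_score = 0.5, float("-inf")
    for _ in range(trials):
        t = rng.uniform(0.0, 1.0)   # Optuna would sample this adaptively
        score = judge(t)
        if score > best_score:
            best_t, best_score = t, score
    return best_t
```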

This is not a memory framework you bolt on. It is the architecture the rest of Phoenix is built around.

---

## Ambient Intelligence

A background daemon monitors system state (battery, disk, session length, time of day, idle) and nudges when genuinely useful. Zero tokens are spent until something is worth saying. Token-budgeted (5,000 tokens/day with an emergency reserve). An LLM judge decides whether silence would be worse than noise before Phoenix speaks.

No other agent has this layer. Most send notifications. Phoenix notices.
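The budget arithmetic can be sketched as follows. Only the 5,000/day figure comes from above; the class, the reserve size, and the urgent-dips-into-reserve behavior are illustrative assumptions:

```python
class AmbientBudget:
    """Daily token budget with an emergency reserve (sizes illustrative)."""

    def __init__(self, daily: int = 5000, reserve: int = 500):
        self.daily, self.reserve = daily, reserve
        self.spent = 0

    def can_speak(self, cost: int, urgent: bool = False) -> bool:
        """Routine nudges stop at daily - reserve; urgent ones may dip into it."""
        ceiling = self.daily if urgent else self.daily - self.reserve
        return self.spent + cost <= ceiling

    def spend(self, cost: int) -> None:
        self.spent += cost
```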

---

## Voice

Optional. Say "Phoenix" to activate, speak naturally, get spoken responses. Supports Groq Whisper for cloud STT and Kokoro for local TTS, with macOS `say` as a fallback.

```bash
pip install "phoenix-os[voice]"
phoenix --voice
```

---

## Configuration

Environment variables:

| Variable | Default | Description |
|----------|---------|-------------|
| `PHOENIX_MODEL` | `anthropic:claude-sonnet-4-20250514` | Model to use |
| `PHOENIX_DIR` | `~/.phoenix` | Data directory |
| `PHOENIX_LOG_LEVEL` | `WARNING` | Logging level |
| `ANTHROPIC_API_KEY` | -- | Anthropic API key |
| `OPENAI_API_KEY` | -- | OpenAI API key |
| `GROQ_API_KEY` | -- | Groq API key (for voice STT) |
| `PICOVOICE_ACCESS_KEY` | -- | Picovoice key (for wake word) |

---

## Hooks

Create `hooks_config.json` in your project root for lifecycle event hooks:

```json
{
  "hooks": {
    "pre_turn": [
      {"command": "echo 'User said something'", "action": "log"}
    ],
    "pre_tool": [
      {"match": "bash", "command": "./approve.sh", "action": "approve_or_deny"}
    ]
  }
}
```

Events: `pre_tool`, `post_tool`, `pre_turn`, `post_turn`, `on_error`.
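For `approve_or_deny`, something like the check below is the likely shape of `approve.sh`. The contract between Phoenix and the hook (exit codes vs. stdout, and how the tool's arguments are passed) is an assumption here, written as a shell function so the deny logic is visible:

```shell
# Hypothetical shape of an approve.sh deny-list check. That the command
# arrives as an argument and that return 0 = approve / 1 = deny are
# assumptions -- verify against Phoenix's hook documentation.
approve() {
  case "$1" in
    *"rm -rf"*|*"sudo "*) return 1 ;;  # deny obviously destructive commands
    *) return 0 ;;                     # approve everything else
  esac
}
```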

---

## Philosophy

Phoenix is built on two founding documents. Read them before using, contributing, or forking:

- [`principle.md`](./principle.md) -- What Phoenix believes. How decisions get made. The discipline.
- [`vision.md`](./vision.md) -- Where Phoenix is going over ten years. The horizon.

**One-line principle:** Phoenix competes on the ride, not the specs. Taste, coherence, and a singular editorial voice are the only defensible position against Anthropic's and OpenAI's infinite engineering budgets.

**One-line vision:** Phoenix is the AI that will know you in ten years, because one person spent ten years refusing to make it anything else.

---

## Contributing

Phoenix is open-source. That does not mean democratic.

Before opening a PR:

1. Read [`principle.md`](./principle.md). If your contribution would fail the "would a committee pick this?" test, do not open the PR.
2. Read [`vision.md`](./vision.md). If your contribution would pull Phoenix away from the horizon, do not open the PR.
3. Issues, discussions, and bug reports are always welcome.
4. New abilities and agents are welcome when they fit the taste of the project.

Generic contributions are rejected, regardless of code quality. This is deliberate. The discipline is the product.

---

## License

Apache 2.0.
