Metadata-Version: 2.4
Name: neuroloop-py
Version: 0.0.1
Summary: NeuroLoop™ – EXG-aware AI agent (Python edition using aider/litellm)
Project-URL: Homepage, https://neuroloop.io
Project-URL: Repository, https://github.com/NeuroSkill-com/neuroloop
Author-email: NeuroSkill team <hello@neuroskill.com>
License: GPL-3.0-only
Keywords: agent,ai,aider,bci,eeg,exg,litellm,neurotech
Requires-Python: >=3.12
Requires-Dist: httpx>=0.24.0
Requires-Dist: litellm>=1.50.0
Requires-Dist: neuroskill-dev>=0.0.1
Requires-Dist: prompt-toolkit>=3.0.0
Requires-Dist: rich>=13.0.0
Provides-Extra: dev
Requires-Dist: pyright>=1.1; extra == 'dev'
Requires-Dist: pytest-asyncio>=0.23; extra == 'dev'
Requires-Dist: pytest>=8.0; extra == 'dev'
Description-Content-Type: text/markdown

# neuroloop-py

**NeuroLoop™ – EXG-aware AI agent (Python edition)**

A Python clone of [neuroloop](../neuroloop) using **aider**'s LLM infrastructure
([litellm](https://github.com/BerriAI/litellm)) instead of the pi coding agent framework.

---

## What it does

neuroloop-py is an EXG-aware conversational AI agent that:

- **Reads your brainwaves** before every turn via `neuroskill status`
- **Injects your live mental state** into the system prompt so the AI can respond with
  full awareness of how you actually feel — cognitively, emotionally, somatically
- **Auto-labels notable moments** (awe, grief, deep focus, moral clarity, etc.) as
  permanent EXG annotations
- **Runs guided protocols** (breathing, meditation, grounding, somatic scans, etc.)
  step by step with OS notifications and EXG timestamps
- **Searches the web**, reads URLs, and maintains **persistent memory** across sessions
- **Pre-warms the compare cache** so session comparisons are instant when you ask
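The "injects your live mental state" step boils down to prepending an EXG snapshot to the system prompt on every turn. A minimal sketch of what that might look like, assuming a `build_system_prompt()` helper as listed in the architecture below (the real implementation in `prompts.py` may differ):

```python
def build_system_prompt(base_prompt: str, exg_status: str) -> str:
    """Append the live `neuroskill status` snapshot to the system prompt."""
    if not exg_status.strip():
        return base_prompt  # EXG unavailable: fall back to the plain prompt
    return (
        f"{base_prompt}\n\n"
        "## Live EXG state (refreshed before this turn)\n"
        f"{exg_status.strip()}"
    )
```

Because the snapshot is rebuilt before each turn, the model always sees the current state rather than a stale one from session start.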

---

## Architecture

```
neuroloop/
├── main.py          Entry point — model selection, CLI args, asyncio.run()
├── agent.py         NeuroloopAgent — main loop, before_agent_start hook, tool dispatch
├── memory.py        ~/.neuroskill/memory.md — read/write persistent memory
├── prompts.py       STATUS_PROMPT + build_system_prompt()
├── neuroskill/
│   ├── run.py       run_neuroskill() — subprocess executor (npx neuroskill ...)
│   ├── signals.py   detect_signals() — 35+ regex-based domain signal detectors
│   └── context.py   select_contextual_data() — parallel neuroskill queries
└── tools/
    ├── web_fetch.py  web_fetch tool — URL → plain text
    ├── web_search.py web_search tool — DuckDuckGo Lite (no API key)
    └── protocol.py   run_protocol tool — timed step execution + EXG labelling
```
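The `run.py` subprocess executor can be sketched as a thin async wrapper around `npx neuroskill …`. This is an illustrative version, not the actual code; the `_base` parameter is a hypothetical hook so the command prefix can be swapped out (e.g. for testing):

```python
import asyncio


async def run_neuroskill(*args: str, timeout: float = 30.0,
                         _base: tuple[str, ...] = ("npx", "neuroskill")) -> str:
    """Run `npx neuroskill <args>` and return its stdout as text.

    `_base` is a hypothetical test hook, not part of the real module.
    """
    proc = await asyncio.create_subprocess_exec(
        *_base, *args,
        stdout=asyncio.subprocess.PIPE,
        stderr=asyncio.subprocess.PIPE,
    )
    try:
        out, err = await asyncio.wait_for(proc.communicate(), timeout)
    except asyncio.TimeoutError:
        proc.kill()          # don't leave a zombie npx process behind
        await proc.wait()
        raise
    if proc.returncode != 0:
        raise RuntimeError(err.decode() or f"neuroskill exited {proc.returncode}")
    return out.decode()
```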

### vs. the TypeScript original

| TypeScript (neuroloop)         | Python (neuroloop-py)                    |
|-------------------------------|------------------------------------------|
| pi coding agent framework     | aider / litellm                          |
| pi `ExtensionAPI`             | `NeuroloopAgent` class                   |
| `before_agent_start` hook     | `agent.before_agent_start()` async method |
| pi `registerTool`             | `ALL_TOOLS` list (OpenAI function schema) |
| pi `InteractiveMode`          | `asyncio` + `rich` console REPL          |
| pi TUI (custom header/footer) | `rich` Markdown + rule separators        |
| WebSocket EXG live panel      | Per-turn `neuroskill status` query       |
| `@sinclair/typebox` schemas   | Plain Python dicts (OpenAI schema)       |
| TypeScript `zod`              | Python type hints                        |
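To illustrate the "plain Python dicts" row: one entry in the `ALL_TOOLS` list would follow the OpenAI function-calling schema. The description and parameters here are assumptions for illustration, not the actual definitions:

```python
# Hypothetical entry in ALL_TOOLS, in OpenAI function-calling form.
WEB_SEARCH_TOOL = {
    "type": "function",
    "function": {
        "name": "web_search",
        "description": "Search the web via DuckDuckGo Lite (no API key needed).",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {"type": "string", "description": "Search terms"},
            },
            "required": ["query"],
        },
    },
}

ALL_TOOLS = [WEB_SEARCH_TOOL]
```

Plain dicts keep the schema identical to what the provider API expects, with no serialization layer in between.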

> **Why litellm?** Aider uses litellm internally. Using it directly gives us the same
> multi-provider support (Anthropic, OpenAI, Gemini, Ollama, …) without requiring a
> full aider installation.

---

## Installation

```bash
cd /agent/ns/neuroloop-py
pip install -r requirements.txt
# or: pip install .
```

Requires Python ≥ 3.12.

---

## Usage

```bash
# Interactive mode
python -m neuroloop.main

# With a specific model
python -m neuroloop.main --model claude-3-5-sonnet-20241022

# With an initial message
python -m neuroloop.main "How is my focus today?"

# Via the console script (after pip install)
neuroloop-py
neuroloop-py --model gpt-4o
```

### Model selection (priority order)
1. `--model MODEL` CLI flag
2. `NEUROLOOP_MODEL` environment variable
3. Auto-detect: `ANTHROPIC_API_KEY` → claude, `OPENAI_API_KEY` → gpt-4o,
   `GEMINI_API_KEY` → gemini, local Ollama → first available model
4. Fallback: `claude-3-5-sonnet-20241022`

---

## Commands

| Command | Description |
|---------|-------------|
| `/exg` | Show live EXG snapshot |
| `/exg on` / `/exg off` | Toggle EXG display |
| `/neuro <cmd> [args]` | Run a neuroskill subcommand |
| `/memory` | Show agent memory |
| `/help` | Show all commands |
| `/quit` | Exit |

---

## Tools available to the AI

| Tool | Description |
|------|-------------|
| `web_fetch` | Fetch any URL → plain text |
| `web_search` | DuckDuckGo Lite search (no API key) |
| `memory_read` | Read `~/.neuroskill/memory.md` |
| `memory_write` | Write / append to memory |
| `neuroskill_label` | Create a timestamped EXG annotation |
| `neuroskill_run` | Run any neuroskill subcommand |
| `prewarm` | Start background `neuroskill compare` cache build |
| `run_protocol` | Execute a timed multi-step guided protocol |

---

## Skills

The `skills/` directory contains the same SKILL.md files as the TypeScript version.
They are loaded on-demand based on context signals detected in the user's prompt
(protocols, sleep, HRV, etc.).
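Signal detection (the mechanism `signals.py` uses to decide which skills to load) amounts to matching the prompt against a table of regexes. A tiny illustrative subset of the 35+ detectors, with made-up patterns:

```python
import re

# Hypothetical subset of the domain signal detectors.
SIGNAL_PATTERNS: dict[str, re.Pattern[str]] = {
    "protocol": re.compile(r"\b(breath(e|ing)?|meditat\w*|grounding)\b", re.I),
    "sleep":    re.compile(r"\b(sleep|insomnia|nap|dream\w*)\b", re.I),
    "hrv":      re.compile(r"\b(hrv|heart rate variability)\b", re.I),
}


def detect_signals(prompt: str) -> set[str]:
    """Return the names of all signals whose pattern matches the prompt."""
    return {name for name, pat in SIGNAL_PATTERNS.items() if pat.search(prompt)}
```

Each detected signal then gates which SKILL.md files get injected into the context for that turn.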

---

## Requirements

- Python ≥ 3.12
- `neuroskill` npm package (`npx neuroskill status`)
- At least one LLM API key (`ANTHROPIC_API_KEY`, `OPENAI_API_KEY`, etc.)
  or a running Ollama instance
