Metadata-Version: 2.4
Name: neonrp
Version: 0.1.9
Summary: Agentic engine for AI role-playing — agents as a game
Author: redoctober
Author-email: nikoloside <nikoloside@gmail.com>
License: MIT
Requires-Python: >=3.10
Requires-Dist: anthropic>=0.40.0
Requires-Dist: click>=8.1.7
Requires-Dist: jsonschema>=4.21.1
Requires-Dist: mcp>=1.0.0
Requires-Dist: openai>=2.16.0
Requires-Dist: pydantic>=2.0.0
Requires-Dist: regex>=2025.11.3
Requires-Dist: textual[syntax]>=0.50.0
Provides-Extra: bridges
Requires-Dist: discord-py>=2.3.0; extra == 'bridges'
Requires-Dist: python-telegram-bot>=21.0; extra == 'bridges'
Provides-Extra: discord
Requires-Dist: discord-py>=2.3.0; extra == 'discord'
Provides-Extra: telegram
Requires-Dist: python-telegram-bot>=21.0; extra == 'telegram'
Description-Content-Type: text/markdown

# NeonRP CLI

NeonRP CLI is a developer-friendly command-line engine for playing and iterating on **file-backed text RPGs**.

## Why NeonRP (vs. a web chat)?
NeonRP is built around **engine-like guarantees**:
- **Resume / Undo / Branch**: checkpointed gameplay you can safely explore.
- **Plan → Diff → Apply**: changes are visible and validated before they land.
- **Append-only events + snapshots**: reproducible and debuggable runs.
- **Minimal-context retrieval**: fast runs even with large worlds.
- **Sandbox & Replay**: isolated experimentation and determinism verification.
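The guarantees above rest on a simple data model: an append-only event log plus checkpoints, where undo rewinds to a checkpoint and a branch forks the log without touching the original timeline. A toy Python sketch of that model (purely illustrative; NeonRP's internal representation is its own):

```python
from dataclasses import dataclass, field

@dataclass
class EventLog:
    """Toy append-only event log with checkpointing and branching."""
    events: list = field(default_factory=list)
    checkpoints: dict = field(default_factory=dict)

    def append(self, event: str) -> None:
        self.events.append(event)  # events are only ever appended

    def save(self, name: str) -> None:
        self.checkpoints[name] = len(self.events)  # checkpoint = log position

    def undo(self, name: str) -> None:
        self.events = self.events[: self.checkpoints[name]]  # rewind to checkpoint

    def branch(self) -> "EventLog":
        # a branch forks the log; the original timeline is untouched
        return EventLog(events=list(self.events), checkpoints=dict(self.checkpoints))

log = EventLog()
log.append("look around")
log.save("start")
log.append("steal the gem")
fork = log.branch()   # explore the stealing route on a fork
log.undo("start")     # the main timeline rewinds safely
```

The fork keeps both events while the main log is back at the checkpoint, which is the shape of the `save` / `undo` / `branch` commands shown in the quick start below.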

## Installation

### Recommended — one-liner (no OS security prompt)

This path fetches the installer with `curl` / `irm`, so neither macOS
Gatekeeper nor Windows SmartScreen flags it. Paste the line for your
platform into a terminal:

```bash
# macOS / Linux
curl -LsSf https://worldlines.gg/install.sh | sh

# Windows (PowerShell)
irm https://worldlines.gg/install.ps1 | iex
```

The script installs `uv` (if missing) and then runs `uv tool install
worldlines`. Verify the install with `neonrp --help` or simply
`worldlines`.

### Alternative — double-click launcher

Prefer not to touch a terminal? Download a platform launcher and
double-click it. The macOS app opens Terminal for you, runs the
first-time install (or self-updates on later launches), then hands off
to the TUI. Windows follows the same flow via a batch file.

| Platform | File | Download |
|---|---|---|
| macOS    | `WorldLines.dmg`     | [latest](https://github.com/LudicDynamics/WorldLines/releases/latest/download/WorldLines.dmg) |
| Windows  | `WorldLines.bat`     | [latest](https://github.com/LudicDynamics/WorldLines/releases/latest/download/WorldLines.bat) |

Linux users should use the terminal install path above:

```bash
curl -LsSf https://worldlines.gg/install.sh | sh
```

Heads-up: browser-downloaded launchers may still be tagged as
untrusted. **On the first run only**, you may need one extra approval
step.

<details>
<summary><b>macOS — first-time open</b></summary>

Open the `.dmg`, drag `WorldLines.app` into `Applications`, then launch
it from there.

Because the app is not yet signed / notarized, Gatekeeper may block the
first launch. If that happens, either remove the quarantine attribute:

```bash
xattr -d com.apple.quarantine /Applications/WorldLines.app 2>/dev/null || true
```

or **right-click** `WorldLines.app` → **Open** → **Open** again in the
confirmation dialog. macOS remembers this choice for later launches.

Prefer a raw bundle instead of a disk image? Use
[`WorldLines.app.zip`](https://github.com/LudicDynamics/WorldLines/releases/latest/download/WorldLines.app.zip).

</details>

<details>
<summary><b>Windows — first-time open</b></summary>

Double-click `WorldLines.bat`. SmartScreen shows **"Windows protected
your PC"** → click **More info** → **Run anyway**. Windows remembers
this choice.

</details>

After the first run, pin the launcher however you like and treat it like
any other shortcut.

### Developers — editable install

```bash
# From local source (editable install — picks up code changes)
uv tool install -e /path/to/NeonRP

# Verify installation
neonrp --help
```

### Users in China (partial GitHub / Cloudflare connectivity)

Some mainland networks throttle or drop GitHub Release downloads and
Cloudflare-fronted domains. If `curl https://worldlines.gg/install.sh`
times out, try:

```bash
# 1) Fetch install.sh from GitHub directly (sometimes faster than the
#    Cloudflare redirect, depending on ISP routing):
curl -LsSf \
  https://github.com/LudicDynamics/WorldLines/releases/latest/download/install.sh \
  | sh

# 2) Force uv to use a China-hosted PyPI mirror before installing:
export UV_DEFAULT_INDEX=https://pypi.tuna.tsinghua.edu.cn/simple
uv tool install worldlines

#    (alternative mirrors: mirrors.aliyun.com/pypi/simple,
#     mirrors.cloud.tencent.com/pypi/simple)

# 3) Both at once — pass CHINA_MIRROR=1 to install.sh:
CHINA_MIRROR=1 curl -LsSf https://worldlines.gg/install.sh | sh
```

If none of the above work, a VPN is the last resort. Auto-update
(`neonrp self-update`) honors the same `UV_DEFAULT_INDEX` variable, so
exporting it in your shell profile keeps future upgrades on the mirror
too.

## Quick start

```bash
# Just run neonrp to start the TUI (like Claude Code)
neonrp

# Or use subcommands directly
neonrp init
neonrp game new
neonrp run "look around"
neonrp save
neonrp undo
neonrp branch "try-stealing-route"
```

`neonrp init` is enough to start working in TUI build mode. Fresh projects now expose packaged builtin skills automatically, so `/game new` is optional unless you want the default starter world and play-agent scaffold immediately.

## Validation & Indexing (M1)

```bash
# validate game data against schema
neonrp validate [--json]

# build / update entity index
neonrp index build [--json]
neonrp index update [--json]

# search entities
neonrp find "tokyo" --kind town --json

# get ranked context for a query
neonrp context "neon city" --limit 5 --json
```

## Embedding Retrieval (Opt-in)

```json
{
  "embedding": {
    "enabled": true,
    "model_ref": "openai-local-qwen3-embedding-0.6b"
  },
  "models": {
    "openai-local-qwen3-embedding-0.6b": {
      "provider": "openai",
      "base_url": "http://127.0.0.1:1234/v1",
      "model": "text-embedding-qwen3-embedding-0.6b",
      "api_key": "1234"
    }
  }
}
```

- `index build` will precompute vectors when `embedding.enabled=true`.
- `context` / `read_context` use hybrid fuzzy + vector retrieval automatically when enabled.
- If embedding is disabled, retrieval remains fuzzy-only.
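Conceptually, hybrid retrieval blends a lexical fuzzy score with an embedding-similarity score. The sketch below uses stdlib `difflib` for the fuzzy part and plain cosine similarity for the vector part; the 50/50 weighting and the scoring details are illustrative assumptions, not NeonRP's actual ranking formula:

```python
import math
from difflib import SequenceMatcher

def fuzzy_score(query: str, text: str) -> float:
    # lexical similarity in [0, 1]
    return SequenceMatcher(None, query.lower(), text.lower()).ratio()

def cosine(a: list[float], b: list[float]) -> float:
    # cosine similarity between two embedding vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def hybrid_score(query: str, text: str,
                 q_vec: list[float], t_vec: list[float],
                 w_fuzzy: float = 0.5) -> float:
    # weighted blend of lexical and vector similarity (weights are illustrative)
    return w_fuzzy * fuzzy_score(query, text) + (1 - w_fuzzy) * cosine(q_vec, t_vec)

# with embedding disabled, only the fuzzy term would contribute
score = hybrid_score("neon city", "Neon City downtown district",
                     [1.0, 0.0], [0.8, 0.6])
```

When `embedding.enabled` is false, the vector term simply drops out and ranking falls back to the fuzzy component alone.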

## Agents & Controlled Writes (M2)

```bash
# create and manage agents
neonrp agent new narrator
neonrp agent list --json
neonrp agent show narrator --json

# apply a pre-generated proposal
neonrp agent run narrator --proposal changes.json --apply --json

# view agent run logs
neonrp agent logs <run_id>
```

## LLM Integration + Import (M3)

```bash
# LLM-driven agent run (single-shot)
neonrp agent run narrator --query "describe the town square" --provider glm --json
neonrp agent run narrator --query "..." --provider stub --apply --json

# import SillyTavern character card (JSON or PNG)
neonrp import sillytavern-card character.json --json
neonrp import sillytavern-card character.png --json

# import on TUI startup
neonrp tui --import-card character.png --import-id my-character
```

## Sandbox & Replay (M4)

```bash
# save, then create a sandbox from it
neonrp save my-checkpoint
neonrp load          # list loadable snapshots
neonrp sandbox new experiment --from my-checkpoint --switch

# list / switch / drop sandboxes
neonrp sandbox list --json
neonrp sandbox switch main
neonrp sandbox drop experiment --force

# verify replay determinism
neonrp replay verify --from my-checkpoint --json

# checkout a past event into a new sandbox
neonrp replay checkout --from my-checkpoint --to 5 --to-branch rollback --json
```

`save/load` snapshots now include the current branch's session transcript in addition to `game/`, events, and metadata.
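Determinism verification boils down to: replay the event log from a snapshot twice and check that both runs hash to the same state. A toy illustration of the idea (NeonRP's actual state model and hashing are internal):

```python
import hashlib
import json

def replay(snapshot: dict, events: list[dict]) -> dict:
    """Fold events over a snapshot to reconstruct state."""
    state = dict(snapshot)
    for ev in events:
        state[ev["key"]] = ev["value"]  # each event deterministically updates state
    return state

def state_hash(state: dict) -> str:
    # canonical JSON so identical states always hash identically
    blob = json.dumps(state, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()

snapshot = {"hp": 10, "location": "town-square"}
events = [{"key": "hp", "value": 8}, {"key": "location", "value": "alley"}]

h1 = state_hash(replay(snapshot, events))
h2 = state_hash(replay(snapshot, events))
# deterministic replay: both hashes match
```

If any event handler were nondeterministic, the two hashes would diverge, which is exactly the failure `replay verify` is there to catch.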

## Terminal User Interface (TUI)

NeonRP includes a rich terminal UI for interactive gameplay:

```bash
# launch the TUI
neonrp tui

# launch with specific branch
neonrp tui --branch experiment

# launch with provider override
neonrp tui --provider openai

# continue previous branch session on startup
neonrp tui --continue

# force start a new session on startup (default behavior)
neonrp tui --new

# import a Tavern card before the first turn
neonrp tui --import-card character.png --import-id my-character

# enable verbose diagnostics
neonrp tui --verbose

# launch with LM Studio (OpenAI-compatible local server)
neonrp tui --provider lmstudio
```

### TUI Features

- **Single-pane conversational layout**: Claude Code style transcript + input
- **Init-first workflow**: `/init` is enough to start build-mode work in a fresh directory
- **Builtin skills on fresh projects**: packaged skills appear even before local `skills/` exist
- **Slash command workflow**: Enter/Tab completion with command hints
- **Session management**: `/sessions` picker, `/continue`, `/new`
- **Direct Tavern import entry points**: `--import-card` on startup or `/import sillytavern-card ...` in-session
- **Verbose diagnostics**: `/verbose on|off|status` and `--verbose`
- **Streaming transcript**: assistant/tool/thinking updates rendered incrementally
- **Live token counters**: local real-time estimates in the top-right status bar
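The live token counters are local estimates rather than provider-reported counts. A common heuristic (an assumption here, not necessarily NeonRP's exact estimator) is roughly four characters per token for English text:

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough local token estimate; real tokenizers vary by model."""
    return max(1, round(len(text) / chars_per_token)) if text else 0

estimate_tokens("look around")  # 11 chars -> about 3 tokens
```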

### Key Bindings

| Key | Action |
|-----|--------|
| `q` | Quit |
| `Ctrl+Q` | Safe quit (immediate terminal return) |
| `Esc` | Interrupt in-flight LLM turn |
| `Ctrl+K` | Command palette |
| `Ctrl+P` | Command palette |
| `F6` | Copy full transcript |

See [TUI Guide](docs/TUI_GUIDE.md) for complete documentation.

## Testing & Coverage

```bash
# fast regression
uv run pytest -q

# full coverage report
uv run pytest --cov=neonrp --cov-report=term-missing -q

# real endpoint e2e (uses ~/.neonrp/config.json)
NEONRP_E2E_LLM=1 uv run pytest tests/test_llm_real_endpoint.py -q
```

## Documentation

- [Product Roadmap](docs/PRODUCT_ROADMAP.md) - v0.1.x → v0.2 → v0.3 → v1.0 release direction
- [Release Notes](docs/releases/) - per-version changelogs
- [Tutorial: Your First Game](docs/TUTORIAL.md) - Zero-to-first-turn walkthrough
- [CLI Reference](docs/CLI_REFERENCE.md) - Complete command reference
- [TUI Guide](docs/TUI_GUIDE.md) - Terminal UI documentation
- [Architecture Overview](docs/ARCHITECTURE.md) - System design
- [Developer Guide](DEV_README.md) - Contribution guidelines

## Development

For contribution guidelines, architecture details, and the roadmap, see:
- [Developer Guide](DEV_README.md)
- [Architecture Overview](docs/ARCHITECTURE.md)
- [Documentation Directory](docs/)
