Metadata-Version: 2.4
Name: neurocog-rime-ui
Version: 0.1.0
Summary: Desktop Qt UI for the RIME multimodal annotation platform
Author: Lab Neurocog
License-Expression: MIT
Requires-Python: >=3.10
Requires-Dist: duckdb>=0.9
Requires-Dist: neurocog-rime-core>=0.1.0
Requires-Dist: opencv-contrib-python>=4.8
Requires-Dist: pyqtgraph>=0.13
Requires-Dist: pyside6>=6.6
Requires-Dist: qtawesome>=1.3
Provides-Extra: docs
Requires-Dist: mkdocs-material>=9.5; extra == 'docs'
Requires-Dist: mkdocs>=1.5; extra == 'docs'
Description-Content-Type: text/markdown

# neurocog-rime-ui

`neurocog-rime-ui` is the Qt desktop application for RIME. It builds on `rime_core` to provide session creation, timeline-based annotation, schema-aware editing, model review, signal visualization, comparison workflows, and export tooling.

## Install

```bash
pip install neurocog-rime-ui
```

For local development:

```bash
pip install -e packages/rime-core
pip install -e packages/rime-ui
```

Optional extras:

```bash
pip install -e "packages/rime-ui[docs]"
```

## Quick Start

Launch the application via the `rime` command, or run it as a Python module:

```bash
rime
python -m rime_ui
```

Open assets directly on launch:

```bash
rime --open /path/to/session.json
rime --open /path/to/session.json --compare /path/to/comparison_session.json
rime --open /path/to/session.json --model /path/to/model.rime
```

Typical workflow:

1. Create or open a session.
2. Load videos and optional signals.
3. Annotate against the active protocol schema.
4. Review pending ghost annotations from model output.
5. Export reports, Parquet datasets, or media clips.
