Metadata-Version: 2.4
Name: motionscorehrpqct
Version: 2.5.1
Summary: MotionScoreHRpQCT core CLI for dataset-first HR-pQCT motion grading
Home-page: https://github.com/wallematthias/MotionScoreHRpQCT
Author: Matthias Walle
Author-email: matthias.walle@ucalgary.ca
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Operating System :: OS Independent
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: opencv-python<4.12
Requires-Dist: scikit-image
Requires-Dist: Pillow
Requires-Dist: numpy<2.0,>=1.23
Requires-Dist: aimio-py
Provides-Extra: torch
Requires-Dist: torch>=2.2; extra == "torch"
Provides-Extra: preview
Requires-Dist: matplotlib; extra == "preview"
Provides-Extra: explain
Requires-Dist: SimpleITK; extra == "explain"
Requires-Dist: matplotlib; extra == "explain"
Provides-Extra: test
Requires-Dist: pytest>=7; extra == "test"
Requires-Dist: pytest-cov>=4; extra == "test"
Requires-Dist: matplotlib; extra == "test"
Dynamic: author
Dynamic: author-email
Dynamic: classifier
Dynamic: description
Dynamic: description-content-type
Dynamic: home-page
Dynamic: license-file
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: requires-python
Dynamic: summary

<img src="resources/MotionScoreHRpQCT.png" alt="MotionScoreHRpQCT logo" width="240" />

# MotionScoreHRpQCT

[![CI](https://github.com/wallematthias/MotionScoreHRpQCT/actions/workflows/ci.yml/badge.svg?branch=main)](https://github.com/wallematthias/MotionScoreHRpQCT/actions/workflows/ci.yml)
[![Coverage Gate](https://img.shields.io/badge/coverage%20gate-70%25-brightgreen)](https://github.com/wallematthias/MotionScoreHRpQCT/blob/main/.github/workflows/ci.yml)
[![PyPI](https://img.shields.io/pypi/v/motionscorehrpqct.svg)](https://pypi.org/project/motionscorehrpqct/)
[![PyPI Downloads](https://img.shields.io/pypi/dm/motionscorehrpqct.svg)](https://pypi.org/project/motionscorehrpqct/)

Motion scoring for HR-pQCT scans using deep convolutional neural networks.

This refactor provides a dataset-first pipeline with BIDS-style derivatives and review-state persistence for direct Slicer integration.

Related repositories:
- Core pipeline (this repo): https://github.com/wallematthias/MotionScoreHRpQCT
- Slicer extension: https://github.com/wallematthias/SlicerMotionScoreHRpQCT

## What Changed In v2

- Legacy CLI commands `grade` and `confirm` are removed.
- New dataset-driven commands: `discover`, `predict`, `review-init`, `review-apply`, `explain`, `export`.
- Default output structure is now:

```text
<dataset_root>/derivatives/MotionScore/
  index.tsv
  dataset_description.json
  <mirrored-source-path-or-flat-aim-name>/
    predictions/predictions.tsv
    preview/<scan_id>_preview.png
    preview/<scan_id>_slice_profile.png
    review/review.tsv
    review/review.json
    review/review_audit.tsv
    explain/<scan_id>_gradcam.mha
```

- AIM reading now uses `aimio-py`.
- Python baseline is now `>=3.10`.
- Output path mapping:
  - Flat input (`*.AIM` directly in dataset root): outputs are grouped under a folder named after each AIM file stem.
  - Structured input (nested folders): outputs mirror the source folder structure under `MotionScore`.
- Raw-vs-mask identification:
  - Primary: AIM header processing log (ISQ-origin markers indicate raw images).
  - Fallback: filename-based heuristics when header signal is unavailable.
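
The filename fallback can be sketched roughly as follows; the marker strings (`MASK`, `SEG`, `TRAB`, `CORT`) are illustrative assumptions, not the shipped heuristics:

```python
from pathlib import Path

# Hypothetical mask-name markers -- the actual heuristics may differ.
MASK_MARKERS = ("MASK", "SEG", "TRAB", "CORT")

def looks_like_raw(path: str) -> bool:
    """Fallback heuristic: treat a file as a raw image unless its
    stem contains a known mask marker."""
    stem = Path(path).stem.upper()
    return not any(marker in stem for marker in MASK_MARKERS)
```

The header-based check always takes priority; this path is only consulted when the AIM processing log carries no usable signal.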

## Installation

```bash
conda create -n motionscore python=3.10 -y
conda activate motionscore

# Clone
# git clone <repo-url>
# cd MotionScoreHRpQCT

# Install CLI + torch inference backend
pip install -e ".[torch]"
```

## Models

Use a model registry rooted at `--model-root` (default `~/.motionscore/MotionScore/models`).

Each registered profile points to a directory containing torch checkpoints:
- `DNN_0.pt`, `DNN_1.pt`, ... (ensemble members)

The model root also holds a `model_registry.json` that maps profile IDs to these directories.

Model weights are licensed so that usage can be tracked. In the current deployment configuration, licenses are granted automatically at signup.
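
A registry file might look roughly like the following; this schema (the `profiles` key and its fields) is an illustrative assumption, not the actual file format:

```python
import json

# Illustrative model_registry.json contents -- the real schema may differ.
registry = {
    "profiles": {
        "base-v1": {
            "display_name": "Base v1",
            "model_dir": "base-v1",  # relative to the model root
            "domain": "radius-tibia",
        }
    }
}

print(json.dumps(registry, indent=2))
```

Profiles registered via `motionscore model-register` (see below) are looked up here by `--model-id`.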

## CLI Usage

### 1) Discover scans

```bash
motionscore discover /path/to/dataset
motionscore discover /path/to/dataset --json
```

### 2) Run prediction + initialize review tables

```bash
motionscore predict /path/to/dataset --confidence-threshold 75
# choose a registered model profile
motionscore predict /path/to/dataset --model-id base-v1
# blinded operator training mode
motionscore predict /path/to/dataset --training-mode
# optional: restrict to one scan_id (repeat flag for multiple)
motionscore predict /path/to/dataset --scan-id sub-001_site-tibia_ses-T1_abcdef1234
# optional quick-look PNG controls
motionscore predict /path/to/dataset --preview-panels 5
motionscore predict /path/to/dataset --no-preview-png
```

Default output root:

```text
/path/to/dataset/derivatives/MotionScore
```

Optional custom output root:

```bash
motionscore predict /path/to/dataset --output-root /tmp/results
```
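
Downstream tooling can consume `predictions/predictions.tsv` with the standard library. The column names `scan_id` and `confidence` used below are assumptions, so check the header row of your file before relying on them:

```python
import csv

def low_confidence_scans(tsv_lines, threshold=75.0):
    """Return scan IDs whose prediction confidence (in percent)
    falls below the review threshold."""
    reader = csv.DictReader(tsv_lines, delimiter="\t")
    return [row["scan_id"] for row in reader
            if float(row["confidence"]) < threshold]

# Usage:
#   with open(".../predictions/predictions.tsv") as fh:
#       pending = low_confidence_scans(fh)
```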

### 3) Update review threshold policy

```bash
motionscore review-init /path/to/dataset/derivatives/MotionScore --confidence-threshold 90
# keep/enable blinded operator training mode
motionscore review-init /path/to/dataset/derivatives/MotionScore --training-mode
```

`--confidence-threshold 100` effectively requires review of all scans.
When `--training-mode` is enabled, pending scans are always graded operator-first, and the AI prediction is revealed only after manual grading.
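
A plausible reading of the threshold policy, as a one-line sketch (the exact comparison used internally may differ):

```python
def needs_review(confidence: float, threshold: float) -> bool:
    # A scan is flagged for manual review when model confidence
    # (in percent) falls below the configured threshold; a threshold
    # of 100 flags effectively every scan.
    return confidence < threshold
```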

### 4) Apply manual review decision for one scan

```bash
motionscore review-apply /path/to/dataset/derivatives/MotionScore \
  --scan-id sub-001_site-tibia_ses-T1_abcdef1234 \
  --manual-grade 3 \
  --reviewer mwalle
```

### 4b) Clear manual grades for re-review

```bash
# clear one operator across all scans
motionscore review-clear /path/to/dataset/derivatives/MotionScore --reviewer opA

# clear everyone
motionscore review-clear /path/to/dataset/derivatives/MotionScore --all-reviewers
```

### 5) Generate on-demand Grad-CAM attention map

```bash
motionscore explain /path/to/dataset/derivatives/MotionScore \
  --scan-id sub-001_site-tibia_ses-T1_abcdef1234
```

### 6) Export final grade table

```bash
motionscore export /path/to/dataset/derivatives/MotionScore
```

Writes `motion_grades.tsv` at the derivatives root (or custom `--output`).
Export includes current per-scan review state plus machine-readable multi-reviewer summary columns:
- `reviewer_count`
- `reviewers` (pipe-delimited reviewer IDs)
- `consensus_method` (currently `mean_manual_grade`)
- `consensus_mean_manual_grade`
- `consensus_grade_rounded`
- dynamic reviewer slots: `reviewer_1_id`, `reviewer_1_grade`, `reviewer_2_id`, `reviewer_2_grade`, ...
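
The consensus columns can be reproduced roughly as follows; this sketch assumes a plain arithmetic mean and Python's built-in `round()` (half-to-even), which may differ from the shipped rounding rule:

```python
def consensus(grades: dict[str, int]) -> dict:
    """Sketch of the multi-reviewer summary columns from a mapping
    of reviewer ID -> manual grade."""
    mean = sum(grades.values()) / len(grades)
    return {
        "reviewer_count": len(grades),
        "reviewers": "|".join(sorted(grades)),  # pipe-delimited IDs
        "consensus_method": "mean_manual_grade",
        "consensus_mean_manual_grade": mean,
        "consensus_grade_rounded": round(mean),  # half-to-even here
    }
```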

### 7) Prepare slice-level retraining manifest (manual labels take priority)

```bash
motionscore train-prepare /path/to/dataset/derivatives/MotionScore \
  --output /path/to/dataset/derivatives/MotionScore/training/train_manifest.tsv \
  --slice-count 8 \
  --seed 13 \
  --cv-folds 10 \
  --min-auto-confidence 0.70 \
  --include-auto-without-manual
```

Training label policy:
- If a manual scan grade exists, it is treated as ground truth and propagated to all slices.
- If no manual grade exists, per-slice auto labels are used (confidence-filtered).
- `--slice-count 8` samples eight randomized, spread-out slices per scan (seeded, reproducible).
- Set `--slice-count 0` to disable random count sampling and use `--slice-step` instead.
- `train-prepare` also builds a slice-wise cache database at `training/slice_db/*.npy` and records
  `cache_npy_path` + `cache_index` in the manifest so training can load preprocessed slices directly.
- The manifest includes `fold_id` for strict fold-aware retraining.
- `--seed` controls deterministic slice sampling, subject splitting, and fold assignment.
- `--cv-folds` sets how many folds are assigned in the manifest (this should typically match the ensemble checkpoint count).
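
One way seeded, spread-out sampling could work, shown as a sketch (the binning strategy here is an assumption, not the shipped sampler):

```python
import random

def sample_slices(n_slices: int, count: int, seed: int = 13) -> list[int]:
    """Draw `count` spread-out slice indices from a stack of
    `n_slices`: split the stack into equal bins and pick one
    seeded-random slice per bin, so repeated runs are reproducible."""
    rng = random.Random(seed)
    edges = [round(i * n_slices / count) for i in range(count + 1)]
    return [rng.randrange(lo, hi) for lo, hi in zip(edges, edges[1:])]
```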

### 8) Transfer-learn from base model weights (PyTorch)

```bash
motionscore train \
  --manifest /path/to/dataset/derivatives/MotionScore/training/train_manifest.tsv \
  --model-root ~/.motionscore/MotionScore/models \
  --init-model-id base-v1 \
  --early-stopping-patience 10 \
  --seed 13 \
  --output-model-dir ~/.motionscore/MotionScore/models/knee-v1
```

Fold-aware retraining behavior:
- `fold_id` is required in the manifest.
- For ensemble model `i`: `test` fold = `i`, `val` fold = `(i+1) mod k`, and training uses all remaining folds.
- Training fails fast if fold metadata is missing/invalid or if fold-derived train/val/test subsets are empty.
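
The fold assignment above can be sketched directly as a minimal helper:

```python
def fold_split(model_index: int, k: int):
    """For ensemble member i with k folds: fold i is held out for
    test, fold (i+1) mod k for validation, the rest train."""
    test = model_index % k
    val = (model_index + 1) % k
    train = [f for f in range(k) if f not in (test, val)]
    if not train:
        raise ValueError("need at least 3 folds for a non-empty train set")
    return train, val, test
```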

Training writes:
- `training_metrics.json`
- `training_plot_live.png` (updated every epoch)
- `training_plot.png` (final summary plot)
- `training_plot_model_<n>.png` (per-ensemble-model curves)

### 9) Register and select custom model profiles

```bash
motionscore model-register \
  --model-root ~/.motionscore/MotionScore/models \
  --model-id knee-v1 \
  --model-dir ~/.motionscore/MotionScore/models/knee-v1 \
  --display-name "Knee Transfer v1" \
  --domain knee \
  --version v1

motionscore model-list --model-root ~/.motionscore/MotionScore/models
```

## Slicer Integration Contract

This repository is core logic only. A separate Slicer extension should:

1. run `motionscore predict ...` from a `Run` button,
2. run `motionscore review-apply ...` as reviewers step through scans,
3. request `motionscore explain ...` on demand to overlay Grad-CAM maps,
4. optionally show `preview/*_preview.png` for quick QC,
5. load all outputs from derivatives without ad hoc state files.

Reference Slicer repository:
- https://github.com/wallematthias/SlicerMotionScoreHRpQCT

## Citation

If you use this software, please cite:

Walle, M., Eggemann, D., Atkins, P.R., Kendall, J.J., Stock, K., Müller, R. and Collins, C.J., 2023. Motion grading of high-resolution quantitative computed tomography supported by deep convolutional neural networks. *Bone*, 166, p.116607. https://doi.org/10.1016/j.bone.2022.116607
