Metadata-Version: 2.4
Name: isoview
Version: 0.1.1
Summary: Multi-view light sheet microscopy image processing pipeline
Project-URL: Homepage, https://github.com/MillerBrainObservatory/isoview
Project-URL: Repository, https://github.com/MillerBrainObservatory/isoview
Author: Miller Brain Observatory
License: MIT
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Science/Research
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Bio-Informatics
Classifier: Topic :: Scientific/Engineering :: Image Processing
Requires-Python: ==3.12.9
Requires-Dist: cellpose>=3.0.0
Requires-Dist: dask[array]>=2024.1.0
Requires-Dist: fsspec>=2023.12.0
Requires-Dist: glfw>=2.8.0
Requires-Dist: h5py>=3.10.0
Requires-Dist: imageio>=2.31.0
Requires-Dist: ipywidgets>=8.1.0
Requires-Dist: jupyterlab
Requires-Dist: jupyterlab-vim>=4.1.0
Requires-Dist: matplotlib>=3.7.0
Requires-Dist: mbo-fastplotlib[imgui,notebook]
Requires-Dist: mbo-utilities[all]>=2.6.0
Requires-Dist: numpy<3.0,>=1.26.0
Requires-Dist: ome-zarr>=0.9.0
Requires-Dist: opencv-python>=4.8.0
Requires-Dist: pyklb>=0.3.0
Requires-Dist: scikit-image>=0.22.0
Requires-Dist: scikit-learn>=1.3.0
Requires-Dist: scipy>=1.11.0
Requires-Dist: tifffile>=2023.7.0
Requires-Dist: tqdm>=4.66.0
Requires-Dist: trackpy>=0.7
Requires-Dist: xmltodict>=0.13.0
Requires-Dist: zarr>=3.0.0
Provides-Extra: dev
Requires-Dist: mypy>=1.5.0; extra == 'dev'
Requires-Dist: pytest-cov>=4.1.0; extra == 'dev'
Requires-Dist: pytest>=7.4.0; extra == 'dev'
Requires-Dist: ruff>=0.0.280; extra == 'dev'
Description-Content-Type: text/markdown

# IsoView Light Sheet Microscopy Pipeline

## Acquisition Modes

Raw input is always flat: `SPC##_TM#####_ANG###_CM#_CHN##_PH#.stack` files in a single directory.
The acquisition mode is auto-detected from the number of distinct SPC and TM values:

| Condition | Mode | Description |
|-----------|------|-------------|
| Multiple TM values | timelapse | time series, any number of specimens |
| Single TM + multiple SPC | tiled | spatial tiles, one timepoint |
| Single TM + single SPC | single | treated as timelapse with 1 timepoint |
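The detection rule in the table can be sketched as follows (function and regex names are illustrative; the real logic lives in `io.py`'s project detection):

```python
import re
from typing import Iterable

STACK_RE = re.compile(r"SPC(\d{2})_TM(\d{5})_ANG\d{3}_CM\d_CHN\d{2}_PH\d\.stack")

def detect_mode(filenames: Iterable[str]) -> str:
    """Classify a raw directory from the unique SPC/TM values in its filenames."""
    specimens, timepoints = set(), set()
    for name in filenames:
        m = STACK_RE.fullmatch(name)
        if m:
            specimens.add(m.group(1))
            timepoints.add(m.group(2))
    if len(timepoints) > 1:
        return "timelapse"   # time series, any number of specimens
    if len(specimens) > 1:
        return "tiled"       # spatial tiles, one timepoint
    return "single"          # treated as a timelapse with 1 timepoint
```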

## Filename Tags

| Tag | Meaning | Raw | Corrected |
|-----|---------|-----|-----------|
| `SPC##` / `SPM##` | specimen | `SPC00` | `SPM00` |
| `TM#####` | timepoint | `TM00000` | `TM000000` (6 digits) |
| `CM#` | camera | `CM0` | `CM00` |
| `CHN##` | channel (raw) | `CHN00`, `CHN01` | — |
| `VW##` | view (corrected) | — | `VW00` (z-scan), `VW90` (y-scan) |
| `ANG###` | illumination angle | `ANG000` | — (dropped) |
| `PH#` | phase | `PH0` | — (dropped) |
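A raw filename can be unpacked against the tag table like this (a sketch; names are illustrative, not the pipeline's actual parser):

```python
import re

# Raw-filename pattern built from the tag table above.
RAW_TAGS = re.compile(
    r"SPC(?P<specimen>\d{2})_TM(?P<timepoint>\d{5})_ANG(?P<angle>\d{3})"
    r"_CM(?P<camera>\d)_CHN(?P<channel>\d{2})_PH(?P<phase>\d)\.stack"
)

def parse_raw(name: str) -> dict[str, int]:
    """Split a raw .stack filename into its numeric tag values."""
    m = RAW_TAGS.fullmatch(name)
    if m is None:
        raise ValueError(f"not a raw stack filename: {name}")
    return {k: int(v) for k, v in m.groupdict().items()}
```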

## Raw Input Layout

```
data_root/
├── Background_0.tif, Background_1.tif, ...
├── ch00_spec00.xml, ch00_spec01.xml, ...
├── SPC00_TM00000_ANG000_CM0_CHN00_PH0.stack
├── SPC00_TM00000_ANG000_CM1_CHN00_PH0.stack
├── SPC00_TM00000_ANG000_CM2_CHN01_PH0.stack
├── SPC00_TM00000_ANG000_CM3_CHN01_PH0.stack
├── SPC01_TM00000_ANG000_CM0_CHN00_PH0.stack   (tiled: more SPC##)
├── SPC00_TM00001_ANG000_CM0_CHN00_PH0.stack   (timelapse: more TM#####)
└── ...
```

XML metadata files `ch##_spec##.xml` map to specimens by index (e.g. `ch00_spec01.xml` → `SPC01`).
Background images are shared across all specimens and timepoints.

## Output Layout

The output directory is named `{input_dir.name}{corrected_suffix}` and created as a sibling of the input directory.
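In `pathlib` terms (helper name hypothetical):

```python
from pathlib import Path

def output_root(input_dir: Path, corrected_suffix: str = ".corrected") -> Path:
    """Sibling directory named {input_dir.name}{corrected_suffix}."""
    return input_dir.with_name(input_dir.name + corrected_suffix)
```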

### Correction (`correct_stack`)

| Mode | Path | Filename |
|------|------|----------|
| Timelapse | `root.corrected/SPM00/TM000000/` | `SPM00_TM000000_CM00_VW00.ome.tif` |
| Tiled | `root.corrected/SPM00/` | `SPM00_CM00_VW00.ome.tif` |

Per camera, correction writes the corrected volume, segmentation mask, xz/xy masks, xy projection, and minimum intensity.

### Fusion (`multi_fuse`)

| Mode | Path | Filename |
|------|------|----------|
| Timelapse | `root.corrected/Results/MultiFused_adaptive/SPM00/TM000000/` | `SPM00_TM000000_CM00_CM01_VW00.ome.tif` |
| Tiled | `root.corrected/Results/MultiFused_adaptive/SPM00/` | `SPM00_CM00_CM01_VW00.ome.tif` |

Per pair, fusion writes the fused volume, projections, transformation parameters, intensity correction, and masks.

## Supported Output Formats

| Format | Extension | Notes |
|--------|-----------|-------|
| OME-TIFF | `.ome.tif` | with metadata, optional resolution pyramids |
| Zarr v3 | `.zarr` | OME-NGFF metadata |
| KLB | `.klb` | Keller Lab Block (bzip2) |
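The config's `output_format` value selects among the extensions above; a minimal mapping (function name hypothetical, the real writer dispatch lives in `io.py`):

```python
def output_name(stem: str, output_format: str) -> str:
    """Map an output_format value ("tif", "zarr", "klb") to a full filename."""
    ext = {"tif": ".ome.tif", "zarr": ".zarr", "klb": ".klb"}
    try:
        return stem + ext[output_format]
    except KeyError:
        raise ValueError(f"unsupported format: {output_format}") from None
```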

## Usage

Entrypoints: `pipeline/correct_stack.py` (correction) and `pipeline/multi_fuse.py` (fusion).

```python
from pathlib import Path
from isoview import ProcessingConfig, correct_stack, multi_fuse

config = ProcessingConfig(
    input_dir=Path(r"E:\isoview\dataset"),
    # specimens=None,                       # auto-detect from SPC## in filenames
    # timepoints=None,                      # auto-detect from TM## in filenames
    corrected_suffix=".corrected",          # output folder suffix
    specimen=0,                             # default specimen index
    camera_pairs=[(0, 1), (2, 3)],          # ortho camera pairs to fuse

    # output
    output_format="tif",                    # tif, zarr, or klb
    compression="zstd",                     # zstd, lzw, deflate, or None
    compression_level=3,                    # 1-22 for zstd, 1-9 for others

    # transforms (applied to second camera in each pair)
    rotation=0,                             # 0=none, 1=90cw, -1=90ccw
    flip_horizontal=False,
    flip_vertical=False,

    # correction
    median_kernel=(3, 3),                   # dead pixel filter, None to disable
    background_percentile=5.0,
    mask_percentile=1.0,
    segment_mode=1,                         # 0=none, 1=segment+mask, 2=masks, 3=global

    # fusion
    blending_method="adaptive",             # adaptive, geometric, average, wavelet
    blending_range=20,                      # transition zone width (z-planes)

    # per-specimen overrides (tiled mode)
    # tile_crops={"SPM00": {"crop_depth": {0: 450}}}
    # view_orientation={"SPM00": {"flip_axis": 1}, "SPM01": {"flip_axis": 0}}
)

correct_stack(config)
multi_fuse(config, estimate_params=True, apply_fusion=True)
```

## Processing Flow

1. **Correction** (`correct_stack()` / `IsoviewProcessor`): reads raw `.stack` + XML, applies dead pixel correction, foreground segmentation, generates masks/projections, outputs corrected volumes.

2. **Fusion** (`multi_fuse()`): registers camera pairs via phase cross-correlation, applies intensity correction, blends views (adaptive/geometric/wavelet), outputs fused volumes.
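The registration step in (2) can be sketched with a bare NumPy phase cross-correlation (integer shifts only; the pipeline's `transforms.py` presumably adds masking and refinement on top of this idea):

```python
import numpy as np

def phase_shift(ref: np.ndarray, mov: np.ndarray) -> tuple[int, ...]:
    """Return the integer shift that aligns `mov` to `ref`, i.e.
    np.roll(mov, shift) ~ ref, via phase cross-correlation."""
    cross = np.fft.fftn(ref) * np.conj(np.fft.fftn(mov))
    cross /= np.abs(cross) + 1e-12           # keep phase only
    corr = np.abs(np.fft.ifftn(cross))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap peaks past the midpoint around to negative shifts
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))
```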

## Key Modules

| Module | Purpose |
|--------|---------|
| `config.py` | `ProcessingConfig` dataclass |
| `pipeline.py` | `IsoviewProcessor` orchestrator |
| `io.py` | multi-format I/O, project detection |
| `fusion.py` | multi-view fusion |
| `segmentation.py` | foreground detection |
| `corrections.py` | dead pixel / background correction |
| `transforms.py` | camera registration, geometric transforms |
| `masks.py` | mask generation |
| `intensity.py` | per-camera intensity correction |

## Camera Configuration

Default pairs are `[(0, 1), (2, 3)]`: cameras 0 and 1 share CHN00/VW00; cameras 2 and 3 share CHN01/VW90.
Only the second camera in each pair receives the configured rotation/flip transforms.
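That second-camera rule can be sketched with NumPy (a sketch mirroring the `rotation` / `flip_*` options in `ProcessingConfig`; it assumes zyx volumes with the transforms acting on the last two axes):

```python
import numpy as np

def apply_pair_transform(vol: np.ndarray, rotation: int = 0,
                         flip_horizontal: bool = False,
                         flip_vertical: bool = False) -> np.ndarray:
    """In-plane transform for the second camera of a pair."""
    if rotation:  # 1 = 90° clockwise, -1 = 90° counter-clockwise
        vol = np.rot90(vol, k=-rotation, axes=(-2, -1))
    if flip_horizontal:
        vol = np.flip(vol, axis=-1)
    if flip_vertical:
        vol = np.flip(vol, axis=-2)
    return vol
```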
