Metadata-Version: 2.4
Name: magdi-segmentation-models-3d
Version: 0.1
Summary: MAGDI Segmentation Models 3D
Author-email: Christian Hänig <christian.haenig@hs-anhalt.de>, Christian Gurski <christian.gurski@hs-anhalt.de>
License-Expression: MIT
Project-URL: Gitlab, https://gitlab.hs-anhalt.de/ki/projekte/magdi/magdi-data
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.12
Classifier: Operating System :: OS Independent
Requires-Python: >=3.12
Description-Content-Type: text/markdown; charset=UTF-8
License-File: LICENSE
Requires-Dist: torch>=2.6.0
Requires-Dist: torchvision>=0.21.0
Requires-Dist: numpy>=1.26.3
Requires-Dist: monai>=1.5.1
Requires-Dist: python-dotenv>=1.0.1
Requires-Dist: typing_extensions>=4.13.2
Requires-Dist: pyyaml>=6.0.2
Requires-Dist: transformers<5.0.0,>=4.55.2
Requires-Dist: huggingface-hub>=0.34.4
Requires-Dist: pillow>=11.2.1
Requires-Dist: optree>=0.17.0
Requires-Dist: acvl-utils>=0.2.3
Requires-Dist: dynamic-network-architectures>=0.4.1
Requires-Dist: tqdm
Requires-Dist: scipy
Requires-Dist: batchgenerators>=0.25.1
Requires-Dist: scikit-learn
Requires-Dist: scikit-image>=0.19.3
Requires-Dist: SimpleITK>=2.2.1
Requires-Dist: pandas
Requires-Dist: graphviz
Requires-Dist: tifffile
Requires-Dist: requests
Requires-Dist: nibabel
Requires-Dist: matplotlib
Requires-Dist: seaborn
Requires-Dist: imagecodecs
Requires-Dist: yacs
Requires-Dist: batchgeneratorsv2>=0.3.0
Requires-Dist: einops
Requires-Dist: blosc2>=3.0.0b1
Requires-Dist: nnunetv2==2.6.2
Dynamic: license-file

# MAGDI Segmentation Models 3D

This Python package, ``magdi_segmentation_models_3d``, provides custom Hugging Face
compatible models for 3D image segmentation for the MAGDI project.

## Hugging Face Custom Models

Documentation on Hugging Face: https://huggingface.co/docs/transformers/en/custom_models

Examples:
https://github.com/huggingface/transformers/tree/main/src/transformers/models
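
Every wrapper below follows the standard Hugging Face custom-model recipe from the
documentation linked above: subclass ``PretrainedConfig`` and ``PreTrainedModel``, point
the model at its config class, then register both for the auto classes. A minimal,
self-contained sketch of that recipe (the ``ToySeg*`` names are illustrative only, not
classes from this package):

```python
import torch
import torch.nn as nn
from transformers import PretrainedConfig, PreTrainedModel


class ToySegConfig(PretrainedConfig):
    """Toy config illustrating the pattern; not part of this package."""

    model_type = "toyseg"

    def __init__(self, in_channels=1, out_channels=5, **kwargs):
        self.in_channels = in_channels
        self.out_channels = out_channels
        super().__init__(**kwargs)


class ToySegModel(PreTrainedModel):
    """Toy 3D 'segmentation' model: a single 1x1x1 convolution."""

    config_class = ToySegConfig

    def __init__(self, config):
        super().__init__(config)
        self.head = nn.Conv3d(
            config.in_channels, config.out_channels, kernel_size=1
        )

    def forward(self, pixel_values):
        # pixel_values: (batch, channels, D, H, W)
        return self.head(pixel_values)


config = ToySegConfig()
model = ToySegModel(config)
logits = model(torch.randn(1, 1, 16, 16, 16))
print(tuple(logits.shape))  # (1, 5, 16, 16, 16)
```

Because each wrapper sets a ``config_class`` this way, the usual
``save_pretrained``/``from_pretrained`` round trip works for all models in this
package.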

### mednext

MedNeXt implementation from MONAI, wrapped as a Hugging Face model.

#### References:

- 10.48550/arXiv.2303.09975

#### Usage example:

```python
from magdi_segmentation_models_3d import (
    MedNeXtModel,
    MedNeXtConfig,
    MedNeXtForImageSegmentation,
    MedNeXtImageProcessor,
)

MedNeXtConfig.register_for_auto_class()
MedNeXtModel.register_for_auto_class("AutoModel")
MedNeXtForImageSegmentation.register_for_auto_class(
    "AutoModelForImageSegmentation"
)
MedNeXtImageProcessor.register_for_auto_class("AutoImageProcessor")

mednext_config = MedNeXtConfig(
    variant='B',
    spatial_dims=3,
    in_channels=1,
    out_channels=5,
    kernel_size=3,
    deep_supervision=False,
)
mednext_model = MedNeXtForImageSegmentation(mednext_config)
processor = MedNeXtImageProcessor()
```

### nnunetresenc

ResidualEncoderUNet from ``dynamic_network_architectures.architectures.unet``, wrapped
as a Hugging Face model.
This architecture is also used by nnU-Net: https://github.com/MIC-DKFZ/nnUNet.

#### References:

- 10.48550/arXiv.1809.10486
- 10.48550/arXiv.2404.09556

#### Usage example:

```python
from magdi_segmentation_models_3d import (
    nnUNetResEncConfig,
    nnUNetResEncModel,
    nnUNetResEncForImageSegmentation,
    nnUNetResEncImageProcessor,
)

nnUNetResEncConfig.register_for_auto_class()
nnUNetResEncModel.register_for_auto_class("AutoModel")
nnUNetResEncForImageSegmentation.register_for_auto_class(
    "AutoModelForImageSegmentation"
)
nnUNetResEncImageProcessor.register_for_auto_class("AutoImageProcessor")

nnunet_config = nnUNetResEncConfig(
    variant="B",  # only variant B is supported so far
    in_channels=1,
    out_channels=5,
    enable_deep_supervision=False,
)
nnunet_model = nnUNetResEncForImageSegmentation(nnunet_config)
processor = nnUNetResEncImageProcessor()
```

### stunet

STU-Net from https://github.com/Ziyan-Huang/STU-Net wrapped as Hugging Face model.

#### References:

- 10.48550/arXiv.2304.06716

#### Usage example:

```python
from magdi_segmentation_models_3d import (
    STUNetConfig,
    STUNetModel,
    STUNetForImageSegmentation,
    STUNetImageProcessor,
)

STUNetConfig.register_for_auto_class()
STUNetModel.register_for_auto_class("AutoModel")
STUNetForImageSegmentation.register_for_auto_class(
    "AutoModelForImageSegmentation"
)
STUNetImageProcessor.register_for_auto_class("AutoImageProcessor")

stu_net_config = STUNetConfig(
    variant='B',
    in_channels=1,
    out_channels=5,
    kernel_size=[[3, 3, 3]] * 6,
    deep_supervision=True,
)
stu_net_model = STUNetForImageSegmentation(stu_net_config)
processor = STUNetImageProcessor()
```

### swinunetrv2

SwinUNETR v2 implementation from MONAI, wrapped as a Hugging Face model.

#### References:

- 10.48550/arXiv.2201.01266

#### Usage example:

```python
from magdi_segmentation_models_3d import (
    SwinUNETRv2Config,
    SwinUNETRv2Model,
    SwinUNETRv2ForImageSegmentation,
    SwinUNETRv2ImageProcessor,
)

SwinUNETRv2Config.register_for_auto_class()
SwinUNETRv2Model.register_for_auto_class("AutoModel")
SwinUNETRv2ForImageSegmentation.register_for_auto_class(
    "AutoModelForImageSegmentation"
)
SwinUNETRv2ImageProcessor.register_for_auto_class("AutoImageProcessor")
swin_unetr_v2_config = SwinUNETRv2Config(
    in_channels=1,
    out_channels=5,
    depths=(2, 2, 2, 2),
    num_heads=(3, 6, 12, 24),
    feature_size=48,
    patch_size=2,
    window_size=7,
    drop_rate=0.2,
    attn_drop_rate=0.2,
    dropout_path_rate=0.2,
    spatial_dims=3,
)
swinunetrv2_model = SwinUNETRv2ForImageSegmentation(swin_unetr_v2_config)

processor = SwinUNETRv2ImageProcessor()
```

### unet

Residual U-Net (an enhanced U-Net variant) implementation from MONAI, wrapped as a
Hugging Face model.

#### References:

- https://link.springer.com/chapter/10.1007/978-3-030-12029-0_40

#### Usage example:

```python
from magdi_segmentation_models_3d import (
    UnetConfig,
    UnetModel,
    UnetForImageSegmentation,
    UnetImageProcessor,
)

UnetConfig.register_for_auto_class()
UnetModel.register_for_auto_class("AutoModel")
UnetForImageSegmentation.register_for_auto_class(
    "AutoModelForImageSegmentation"
)
UnetImageProcessor.register_for_auto_class("AutoImageProcessor")

unet_config = UnetConfig(
    in_channels=1,
    out_channels=5,
    channels=(64, 128, 256, 512, 1024),
    strides=(2, 2, 2, 2),
    num_res_units=2,
    spatial_dims=3,
    dropout=0.2,
)
unet_model = UnetForImageSegmentation(unet_config)
processor = UnetImageProcessor()
```
