Metadata-Version: 2.4
Name: thigh-us-segmentation
Version: 0.1.4
Summary: Pipeline for thigh ultrasound segmentation using nnU-Net and feature extraction
Author: Mara Concepción Alvarez, Paula Crespo Ortega, Arantxa Villanueva Larre, Rafael Cabeza Laguna
Requires-Python: <3.11,>=3.10
Description-Content-Type: text/markdown
Requires-Dist: numpy<2.0,>=1.23
Requires-Dist: pandas<3.0,>=1.5
Requires-Dist: SimpleITK<3.0,>=2.2
Requires-Dist: opencv-python<5.0,>=4.7
Requires-Dist: scipy<1.14,>=1.9
Requires-Dist: matplotlib<4.0,>=3.6
Requires-Dist: requests<3.0,>=2.28
Requires-Dist: nnunetv2==2.6.4

# ThighUSSegmentation

Python library for automatic thigh ultrasound segmentation and analysis using nnU-Net.

## Features

- Automatic image conversion to `.mha`
- Preprocessing (active region detection)
- nnU-Net inference
- Muscle thickness computation
- Radiomics feature extraction
- Export of anatomical landmarks (`.mrk.json`)
- Automatic model download from Zenodo

## Installation

```bash
pip install thigh-us-segmentation
```

### Install PyRadiomics for texture extraction
Texture extraction requires the `PyRadiomics` package, which is not included in the main distribution and must be installed separately:

```bash
pip install pyradiomics==3.0.1
```
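
Because PyRadiomics is optional, downstream code can guard the import and skip texture extraction when it is missing. A minimal sketch (the `HAS_RADIOMICS` flag is a hypothetical name, not part of this package; note that the PyPI package `pyradiomics` imports as `radiomics`):

```python
# Guard the optional dependency so the rest of the pipeline
# still runs when PyRadiomics is not installed.
try:
    import radiomics  # noqa: F401  (PyPI package `pyradiomics`)
    HAS_RADIOMICS = True
except ImportError:
    HAS_RADIOMICS = False

print("PyRadiomics available:", HAS_RADIOMICS)
```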

## Usage

```python
from ThighUSSegmentation import run_full_pipeline

result = run_full_pipeline(
    input_image_path="image.dcm",
    output_root="outputs",
    case_id="case001",
)

print(result["df_results"])

# OR
result = run_full_pipeline(
    input_image_path="image.mha",
    output_root="outputs",
    models_root="D:/mis_modelos",  # you can also specify your own model path
)
```

## Output
```text
outputs/
└── case001/
    ├── image_converted.mha
    ├── image_preprocessed.mha
    ├── case001_labelmap.mha
    ├── markups/
    └── inference_log.txt
```
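
The layout above can be resolved programmatically when post-processing a batch of cases. A minimal sketch (the `expected_outputs` helper is hypothetical, not part of the package):

```python
from pathlib import Path


def expected_outputs(output_root: str, case_id: str) -> dict:
    # Build the per-case paths the pipeline is documented to produce
    case_dir = Path(output_root) / case_id
    return {
        "labelmap": case_dir / f"{case_id}_labelmap.mha",
        "markups_dir": case_dir / "markups",
        "log": case_dir / "inference_log.txt",
    }


paths = expected_outputs("outputs", "case001")
print(paths["labelmap"].as_posix())  # outputs/case001/case001_labelmap.mha
```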

## Returned Results

The pipeline returns a dictionary:

```python
{
    "case_id": str,
    "segmentation_path": str,
    "mrk_paths": dict,
    "df_distances": pd.DataFrame,
    "df_textures": pd.DataFrame,
    "df_results": pd.DataFrame
}
```
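
A typical follow-up is to persist the per-case measurements. The sketch below uses a stand-in dictionary with illustrative values (not real pipeline output) in place of the object returned by `run_full_pipeline`:

```python
import pandas as pd

# Stand-in for the dictionary returned by run_full_pipeline;
# muscle names and thickness values here are purely illustrative.
result = {
    "case_id": "case001",
    "segmentation_path": "outputs/case001/case001_labelmap.mha",
    "mrk_paths": {"femur": "outputs/case001/markups/femur.mrk.json"},
    "df_results": pd.DataFrame(
        {"muscle": ["rectus_femoris"], "thickness_mm": [22.4]}
    ),
}

# Save the per-case measurement table as CSV
result["df_results"].to_csv(f"{result['case_id']}_results.csv", index=False)
```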

## Model

The pretrained nnU-Net model is automatically downloaded from Zenodo on first use.
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.19914473.svg)](https://doi.org/10.5281/zenodo.19914473)

Expected structure:
```text
Dataset001_ThighUS/
└── nnUNetTrainer__nnUNetPlans__2d/
    ├── dataset.json
    ├── plans.json
    ├── fold_0/
    ├── fold_1/
    ├── fold_2/
    ├── fold_3/
    └── fold_4/
```
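
Before running inference it can be useful to check that the downloaded model matches this layout. A minimal sketch (`model_is_complete` is a hypothetical helper, not part of the package):

```python
from pathlib import Path


def model_is_complete(models_root: str) -> bool:
    # Verify the documented nnU-Net layout: dataset.json, plans.json,
    # and five fold directories under the 2d trainer folder.
    trainer_dir = (
        Path(models_root)
        / "Dataset001_ThighUS"
        / "nnUNetTrainer__nnUNetPlans__2d"
    )
    required = ["dataset.json", "plans.json"] + [f"fold_{i}" for i in range(5)]
    return all((trainer_dir / name).exists() for name in required)
```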

## Requirements

- Python ≥ 3.10, < 3.11
- SimpleITK
- PyRadiomics (optional, for texture extraction)
- OpenCV
- NumPy / SciPy / Pandas

## Citation

If you use this work, please cite:

Isensee, F., et al. (2021). nnU-Net: a self-configuring method for deep learning-based biomedical image segmentation. *Nature Methods*.

Mara Concepción Alvarez. (2026).
Thigh Ultrasound Segmentation Model (nnU-Net).
Zenodo. https://doi.org/10.5281/zenodo.19914473

## License

This project uses the following licenses:

- Code: MIT License
- Model weights (Zenodo): CC-BY 4.0

