Metadata-Version: 2.4
Name: panospace
Version: 0.1.0
Summary: High-resolution spatial transcriptomics analysis toolkit
Home-page: https://github.com/hehuifeng/PanoSpace
Author: Hui-Feng He
Author-email: Hui-Feng He <huifeng@mails.ccnu.edu.cn>
License: MIT
Project-URL: Documentation, https://github.com/hehuifeng/PanoSpace
Project-URL: Source, https://github.com/hehuifeng/PanoSpace
Project-URL: Tracker, https://github.com/hehuifeng/PanoSpace/issues
Keywords: spatial transcriptomics,single-cell,deep learning,bioinformatics,cell detection,spatial analysis
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Science/Research
Classifier: Operating System :: POSIX :: Linux
Classifier: Operating System :: MacOS
Classifier: Operating System :: Microsoft :: Windows
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Topic :: Scientific/Engineering :: Bio-Informatics
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.9
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: numpy>=1.23
Requires-Dist: pandas>=1.5
Requires-Dist: anndata>=0.10
Requires-Dist: scanpy>=1.9
Requires-Dist: python-igraph>=0.10
Requires-Dist: leidenalg>=0.9
Requires-Dist: scipy>=1.9
Requires-Dist: scikit-learn>=1.2
Requires-Dist: tqdm>=4.65
Requires-Dist: Pillow>=9.4
Requires-Dist: opencv-python>=4.8
Requires-Dist: matplotlib>=3.7
Requires-Dist: scikit-image>=0.22
Requires-Dist: requests>=2.31
Requires-Dist: ray>=2.7
Requires-Dist: pot>=0.9
Requires-Dist: qpsolvers>=4.0
Provides-Extra: cellvit
Requires-Dist: torch>=2.0; extra == "cellvit"
Requires-Dist: torchvision>=0.15; extra == "cellvit"
Requires-Dist: einops>=0.6; extra == "cellvit"
Requires-Dist: shapely>=2.0; extra == "cellvit"
Provides-Extra: annotation
Requires-Dist: torch>=2.0; extra == "annotation"
Requires-Dist: pyro-ppl>=1.8; extra == "annotation"
Requires-Dist: pytorch-lightning>=2.1; extra == "annotation"
Requires-Dist: lightning>=2.1; extra == "annotation"
Requires-Dist: transformers>=4.33; extra == "annotation"
Requires-Dist: scvi-tools>=1.0; extra == "annotation"
Requires-Dist: pyscipopt; extra == "annotation"
Requires-Dist: gurobipy>=10; extra == "annotation"
Provides-Extra: prediction
Provides-Extra: microenv
Requires-Dist: gseapy>=1.0; extra == "microenv"
Requires-Dist: statsmodels>=0.14; extra == "microenv"
Provides-Extra: all
Requires-Dist: einops>=0.6; extra == "all"
Requires-Dist: gseapy>=1.0; extra == "all"
Requires-Dist: gurobipy>=10; extra == "all"
Requires-Dist: lightning>=2.1; extra == "all"
Requires-Dist: pyro-ppl>=1.8; extra == "all"
Requires-Dist: pyscipopt; extra == "all"
Requires-Dist: pytorch-lightning>=2.1; extra == "all"
Requires-Dist: scvi-tools>=1.0; extra == "all"
Requires-Dist: shapely>=2.0; extra == "all"
Requires-Dist: statsmodels>=0.14; extra == "all"
Requires-Dist: torch>=2.0; extra == "all"
Requires-Dist: torchvision>=0.15; extra == "all"
Requires-Dist: transformers>=4.33; extra == "all"
Dynamic: author
Dynamic: home-page
Dynamic: license-file
Dynamic: provides-extra
Dynamic: requires-dist
Dynamic: requires-python

# PanoSpace

**High-resolution single-cell insight from low-resolution spatial transcriptomics**

![PanoSpace overview](figures/fig1.png)

PanoSpace bridges the gap between spot-based spatial transcriptomics (e.g., 10x
Visium) and single-cell resolution. It combines histology-guided cell detection,
transcriptomic deconvolution, deep-learning-based super-resolution, expression
prediction, and microenvironment analysis to generate consistent cell-level maps
across entire tissue sections.


## 📦 Installation

### System Requirements

- **OS**: Linux (strongly recommended)
- **GPU**: NVIDIA GPU with CUDA support (strongly recommended for performance)
  - CUDA 12.1+ recommended
  - Minimum 8GB GPU memory

### Installation

**Option 1: Install from PyPI (Recommended for Users)**

```bash
# Basic installation (lightweight, no PyTorch)
pip install panospace

# For cell detection functionality (includes PyTorch)
pip install "panospace[cellvit]"

# For cell annotation functionality (includes deep learning libraries)
pip install "panospace[annotation]"

# For microenvironment analysis (lightweight)
pip install "panospace[microenv]"

# For all functionality
pip install "panospace[all]"
```

We recommend running the `pip install` commands above inside a conda environment, created first. If you use the `[cellvit]` or `[annotation]` extras, install PyTorch manually:
```bash
# Create and activate a conda environment (before installing panospace)
conda create -n panospace python=3.11
conda activate panospace

# Install PyTorch manually (required for [cellvit] or [annotation])
# GPU version:
pip install --extra-index-url https://download.pytorch.org/whl/cu121 "torch>=2.1" "torchvision>=0.15"

# CPU-only version:
pip install "torch>=2.1" "torchvision>=0.15"
```
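To sanity-check the environment after installing, a minimal stdlib-only probe can report whether PyTorch is importable and whether an NVIDIA driver is visible on `PATH`. This is a hypothetical helper for illustration, not part of the PanoSpace API:

```python
import importlib.util
import shutil


def gpu_setup_hints() -> dict:
    """Report whether PyTorch is importable and whether nvidia-smi is on PATH.

    Neither check imports torch or touches the GPU, so the probe is cheap and
    safe to run in any environment.
    """
    return {
        "torch_installed": importlib.util.find_spec("torch") is not None,
        "nvidia_smi_found": shutil.which("nvidia-smi") is not None,
    }


print(gpu_setup_hints())
```

If `torch_installed` is `False` after installing an extra that needs it, rerun the manual PyTorch install step above.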

**Option 2: Install from Source (Automatic Setup)**

```bash
git clone https://github.com/hehuifeng/PanoSpace.git
cd PanoSpace
bash install.sh
```

The script will automatically:
- Create a conda environment with all dependencies (except PyTorch)
- Detect your GPU
- Install PyTorch via pip (with CUDA if GPU detected)
- Install PanoSpace package
- Verify the installation

**Option 3: Manual Installation from Source**

For **GPU version** (recommended):
```bash
# Step 1: Create conda environment
conda env create -f environment-gpu.yml
conda activate PanoSpace

# Step 2: Install PyTorch with CUDA support
pip install --extra-index-url https://download.pytorch.org/whl/cu121 "torch>=2.1" "torchvision>=0.15"

# Step 3: Install PanoSpace
pip install .
```

For **CPU-only version**:
```bash
# Step 1: Create conda environment
conda env create -f environment.yml
conda activate PanoSpace

# Step 2: Install PyTorch (CPU-only)
pip install "torch>=2.1" "torchvision>=0.15"

# Step 3: Install PanoSpace
pip install .
```

<details>
<summary><b>Optional: Optimization Solvers (Click to expand)</b></summary>

#### Optimization Solvers for Cell Annotation

PanoSpace uses **Mixed Integer Linear Programming (MILP)** solvers for accurate cell-type annotation with spot-level quota constraints. Two solvers are supported:

**Supported Solvers:**

1. **Gurobi** (Recommended, Commercial but Free for Academia)
   - Significantly faster (10-100x speedup on large datasets)
   - Best for production use and large-scale analyses
   - Free academic license available at: https://www.gurobi.com/academia/academic-program-and-licenses/

2. **SCIP** (Open-Source, Default)
   - Automatically installed with PanoSpace
   - Produces mathematically identical results to Gurobi
   - Suitable for small to medium datasets
   - No additional setup required

**Solver Selection Logic:**

PanoSpace automatically selects the best available solver:
- If **Gurobi is installed** → Uses Gurobi (fastest)
- If **Gurobi is not available** → Uses SCIP (open-source fallback)
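The fallback described above can be sketched as a simple import probe. The helper name is hypothetical and not PanoSpace's internal function; it only illustrates the selection rule:

```python
import importlib.util


def choose_solver() -> str:
    """Pick the MILP backend: Gurobi when importable, SCIP otherwise.

    find_spec() checks availability without actually importing the package,
    so the probe has no side effects.
    """
    if importlib.util.find_spec("gurobipy") is not None:
        return "gurobi"
    return "scip"


print(choose_solver())
```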

Both solvers implement the **same mathematical model** with:
- Global cell-type quotas
- Spot-level quota constraints (ensures consistency within each spot)
- Exact 0/1 assignment (no approximation)
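To make the constraint structure concrete, here is a toy brute-force search over 0/1 assignments, not an MILP solver. It enforces a global quota per cell type and per-spot quotas while maximizing a score, which is the shape of the problem Gurobi/SCIP solve at scale. All names and data are illustrative:

```python
from itertools import product


def best_assignment(scores, global_quota, spot_of, spot_quota):
    """Exhaustively search 0/1 cell-to-type assignments, maximizing total
    score subject to global and per-spot cell-type quotas.

    scores:       per-cell dict of {type: score}
    global_quota: required number of cells per type, whole tissue
    spot_of:      spot index for each cell
    spot_quota:   {(spot, type): required count} for constrained pairs
    """
    types = list(global_quota)
    best, best_score = None, float("-inf")
    for assign in product(types, repeat=len(scores)):
        # Global quota: number of cells per type must match exactly.
        if any(assign.count(t) != global_quota[t] for t in types):
            continue
        # Spot-level quota: counts within each constrained spot must match.
        ok = True
        for (spot, t), q in spot_quota.items():
            n = sum(1 for i, a in enumerate(assign) if spot_of[i] == spot and a == t)
            if n != q:
                ok = False
                break
        if not ok:
            continue
        total = sum(scores[i][a] for i, a in enumerate(assign))
        if total > best_score:
            best, best_score = assign, total
    return best, best_score


# Three cells in two spots; spot 0 needs one "A" and one "B", spot 1 one "B".
scores = [{"A": 0.9, "B": 0.1}, {"A": 0.4, "B": 0.6}, {"A": 0.2, "B": 0.8}]
global_quota = {"A": 1, "B": 2}
spot_of = [0, 0, 1]
spot_quota = {(0, "A"): 1, (0, "B"): 1, (1, "B"): 1}
print(best_assignment(scores, global_quota, spot_of, spot_quota))
# -> (('A', 'B', 'B'), 2.3)
```

Exhaustive search is exponential in the number of cells; MILP solvers reach the same exact optimum via branch-and-bound, which is why Gurobi/SCIP are used in practice.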

**Installation:**

**SCIP** (installed by default):
```bash
# Already included in environment.yml
conda activate PanoSpace
```

**Gurobi** (optional, recommended for better performance):
```bash
# Install Gurobi
conda install -c conda-forge gurobipy

# Request free academic license at: https://www.gurobi.com/academia/academic-program-and-licenses/
# Follow Gurobi's instructions to activate the license

# Verify installation
python -c "import gurobipy; print('Gurobi installed successfully!')"
```

*Note: Based on our experience, Gurobi typically solves problems in under 1 minute, while SCIP may take hundreds of minutes for the same problem.*



</details>

## 🚀 Quick Start

### Basic Workflow

```python
import panospace as ps
from PIL import Image

# 1. Detect cells from tissue image
tissue = Image.open("path/to/visium_slide.tif")
seg_adata, contours = ps.detect_cells(tissue, model="cellvit", gpu=True)

# 2. Deconvolve Visium spots
#    visium_adata: AnnData with .X (expression) and .obsm['spatial'] (coordinates)
#    sc_reference: AnnData with .X and .obs[celltype_key] (cell type labels)
deconv_adata = ps.deconv_celltype(
    adata_vis=visium_adata,
    sc_adata=sc_reference,
    celltype_key="celltype_major",  # Column name in sc_reference.obs
    methods=['RCTD', 'spatialDWLS', 'cell2location']
)

# 3. Super-resolve to cell level
sr_adata = ps.superres_celltype(
    deconv_adata=deconv_adata,
    img_dir="path/to/visium_slide.tif"
)

# 4. Annotate segmented cells
annotated_adata = ps.celltype_annotator(
    decov_adata=visium_adata,
    sr_deconv_adata=sr_adata,
    seg_adata=seg_adata
)

# 5. Predict gene expression
pred_adata = ps.genexp_predictor(
    sc_adata=sc_reference,
    spot_adata=visium_adata,
    infered_adata=annotated_adata,
    celltype_list=list(sc_reference.obs["celltype_major"].unique())
)
```


### Cell-Cell Interaction Analysis

```python
# Analyze interactions between cell pairs
pairs = [('Cancer_epithelial', 'CAF'), ('T_cell', 'Macrophage')]
results = ps.analyze_interaction(
    adata=annotated_adata,
    cell_type_pairs=pairs,
    cell_type_col='pred_cell_type',  # Column in adata.obs
    radius=100.0  # Neighborhood radius (same units as spatial coordinates)
)

# Extract results and find correlated genes
expr_df, target_abundance, _ = results[('Cancer_epithelial', 'CAF')]
corr_results = ps.correlation_analysis(expr_df, target_abundance)
significant_genes = corr_results.query('p_adjust < 0.05')['gene'].tolist()

# Functional enrichment
if len(significant_genes) > 0:
    go_results = ps.spatial_enrichment(
        gene_list=significant_genes,
        organism='Human',
        gene_sets='GO_Biological_Process_2021'
    )
```

### Data Requirements

**Visium Data** (`visium_adata`)
- AnnData object with `.X` (gene expression) and `.obsm['spatial']` (coordinates)

**Single-Cell Reference** (`sc_reference`)
- AnnData object with `.X` and `.obs[celltype_key]` (cell type labels)
- At least 100 cells per type recommended; gene names should overlap with the Visium data
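The two reference checks above can be verified with a few lines of plain Python before running the pipeline. This is an illustrative sketch using stdlib types, not a PanoSpace function; with real AnnData objects the inputs would come from `sc_reference.obs[celltype_key]`, `sc_reference.var_names`, and `visium_adata.var_names`:

```python
from collections import Counter


def check_reference(cell_labels, sc_genes, visium_genes, min_cells=100):
    """Report cell types below the minimum count and the genes shared
    between the single-cell reference and the Visium panel."""
    counts = Counter(cell_labels)
    too_small = {t: n for t, n in counts.items() if n < min_cells}
    shared = sorted(set(sc_genes) & set(visium_genes))
    return too_small, shared


labels = ["T_cell"] * 120 + ["CAF"] * 40
small, shared = check_reference(
    labels,
    ["EPCAM", "CD3D", "ACTA2"],
    ["CD3D", "ACTA2", "KRT8"],
)
print(small)   # -> {'CAF': 40}
print(shared)  # -> ['ACTA2', 'CD3D']
```

Types flagged in `small` are candidates for merging or dropping; an empty `shared` list usually means mismatched gene naming (e.g., symbols vs. Ensembl IDs).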

**Histology Image**
- TIFF/PNG/JPEG format, 40x+ magnification recommended



## 📖 Citation

If you use PanoSpace in your research, please cite:

He, HF., Peng, P., Yang, ST. et al. Unlocking single-cell level and continuous whole-slide insights in spatial transcriptomics with PanoSpace. *Nat Comput Sci* (2026). https://doi.org/10.1038/s43588-025-00938-y


## 📧 Contact

For questions or collaboration opportunities:

- **Hui-Feng He** (<huifeng@mails.ccnu.edu.cn>)
- **Xiao-Fei Zhang** (<zhangxf@ccnu.edu.cn>)

## 📄 License

This project is licensed under the MIT License - see the LICENSE file for details.


---

**Note:** PanoSpace is actively under development. API changes may occur between
versions. Please check the changelog when upgrading.
