Metadata-Version: 2.4
Name: SatVision
Version: 0.0.3
Summary: Opinionated Inference Framework for Remote Sensing Deep Learning Applications.
Author: Lionel Peer
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE.md
Requires-Dist: jaxtyping>=0.3.3
Requires-Dist: beartype>=0.22.2
Requires-Dist: torch>=2.2.0
Requires-Dist: lightning-utilities>=0.15.2
Requires-Dist: pydantic>=2.12.4
Requires-Dist: torchvision>=0.24.1
Requires-Dist: einops>=0.8.1
Requires-Dist: polars>=1.37.0
Requires-Dist: pyarrow>=22.0.0
Requires-Dist: scipy>=1.15.3
Requires-Dist: tqdm>=4.67.1
Requires-Dist: pyarrow-stubs>=20.0.0.20251215
Requires-Dist: scipy-stubs>=1.15.3.0
Requires-Dist: types-tqdm>=4.67.3.20260303
Provides-Extra: notebook
Requires-Dist: jupyter>=1.1.1; extra == "notebook"
Provides-Extra: onnx
Requires-Dist: onnx>=1.19.1; extra == "onnx"
Requires-Dist: onnxscript>=0.5.3; extra == "onnx"
Requires-Dist: onnxruntime-gpu>=1.24.3; sys_platform == "linux" and extra == "onnx"
Requires-Dist: onnxruntime>=1.24.3; sys_platform != "linux" and extra == "onnx"
Provides-Extra: transformers
Requires-Dist: transformers>=5.3.0; extra == "transformers"
Dynamic: license-file

<!--
SPDX-License-Identifier: MIT
Copyright (c) 2025–2026 Lionel Peer
-->
# SatVision
[![docs-build](https://github.com/liopeer/SatVis/actions/workflows/docs-build.yml/badge.svg)](https://github.com/liopeer/SatVis/actions/workflows/docs-build.yml)
[![docs-deploy](https://github.com/liopeer/SatVis/actions/workflows/docs-deploy.yml/badge.svg)](https://github.com/liopeer/SatVis/actions/workflows/docs-deploy.yml)
[![docs-deploy-main](https://github.com/liopeer/SatVis/actions/workflows/docs-deploy-main.yml/badge.svg)](https://github.com/liopeer/SatVis/actions/workflows/docs-deploy-main.yml)
[![pypi-publish](https://github.com/liopeer/SatVis/actions/workflows/pypi-publish.yml/badge.svg)](https://github.com/liopeer/SatVis/actions/workflows/pypi-publish.yml)
[![static-checks](https://github.com/liopeer/SatVis/actions/workflows/static-checks.yml/badge.svg)](https://github.com/liopeer/SatVis/actions/workflows/static-checks.yml)

An opinionated framework for deploying computer vision models on remote sensing imagery.

## 🚀 Getting Started
The framework supports both local Python inference and scalable deployment via NVIDIA Triton Inference Server.

### 1. 💻 Local Inference
Best for development and direct integration into Python applications.

**Installation:**
```bash
pip install SatVision
# For TensorRT support (requires NVIDIA GPU and drivers):
pip install "SatVision[tensorrt-cu12]"
```

**Usage Example:**
```python
import satvis
from pathlib import Path

# Initialize model (supports 'torch' or 'tensorrt' backends)
# Note: 'tensorrt' backend requires the optional dependency installed above.
model = satvis.get_inference_model(
    model_name="torchvision::resnet50", 
    # override_args={"backend_type": "tensorrt"} # Uncomment to use TensorRT
)

image_path = Path("path/to/your/image.jpg") # Replace with your image path

predictions = model.predict(
    image=image_path, 
    apply_transform=True
)

# predictions maps class labels to confidence scores
best_label, best_score = max(predictions.items(), key=lambda item: item[1])

print(f"Best prediction: {best_label} ({best_score:.4f})")
```
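
If you need more than the single best class, the returned mapping can be ranked directly. A minimal sketch, assuming `predict` returns a `dict` of class labels to confidence scores as in the example above (the labels and scores below are made up for illustration):

```python
def top_k(predictions: dict[str, float], k: int = 5) -> list[tuple[str, float]]:
    """Return the k highest-scoring (label, score) pairs, best first."""
    return sorted(predictions.items(), key=lambda item: item[1], reverse=True)[:k]

# Hypothetical scores for illustration only.
scores = {"airport": 0.71, "harbor": 0.18, "stadium": 0.08, "forest": 0.03}
print(top_k(scores, k=2))  # [('airport', 0.71), ('harbor', 0.18)]
```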

### 2. 🏗️ Server Deployment
Deploy models using NVIDIA Triton Inference Server for high-performance serving.

**Setup:**

1. **Clone the repository:**
   ```bash
   git clone https://github.com/liopeer/SatVis.git
   cd SatVis
   ```

2. **Install `just` (Command Runner):**
   Follow the instructions in the [`just` documentation](https://just.systems/man/en/introduction.html)
   to install `just`, which runs the project's predefined commands.

3. **Install Dependencies:**
   This installs `uv` (the package manager) and the project's dependencies.
   ```bash
   just install-uv
   just install-dev
   source .venv/bin/activate
   ```
   This installs `tensorrt==10.9.0.34`. [This version is compatible](https://docs.nvidia.com/deeplearning/frameworks/support-matrix/index.html#framework-matrix-2025)
   with NVIDIA Triton Server `nvcr.io/nvidia/tritonserver:25.03-py3` and driver `>=570`.

4. **Generate Models:**
   Create the optimized ONNX/TensorRT models for Triton Server.
   ```bash
   python server/generate_models.py
   ```

5. **Launch Triton Server:**
   *Requirements: NVIDIA Driver >= 570, Docker with the NVIDIA Container Toolkit.*
   ```bash
   docker compose -f server/docker-compose.yml up -d
   ```

6. **Run Inference Client:**
   Send an inference request to the running server.
   ```bash
   uv run scripts/predict_resnet_http.py
   ```
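
Triton serves models over the KServe v2 inference protocol, so the client in step 6 ultimately POSTs a JSON body to `http://<host>:8000/v2/models/<model_name>/infer`. The sketch below builds such a request body; the tensor name `input`, the shape, and the dtype are illustrative assumptions, not the actual model configuration:

```python
import json

def build_kserve_request(data: list[float], shape: list[int],
                         input_name: str = "input", dtype: str = "FP32") -> str:
    """Build a KServe v2 inference request body for Triton's HTTP endpoint."""
    body = {
        "inputs": [
            {
                "name": input_name,
                "shape": shape,
                "datatype": dtype,
                "data": data,  # row-major flattened tensor values
            }
        ]
    }
    return json.dumps(body)

# A tiny 1x3 dummy tensor; a real request carries a preprocessed image.
payload = build_kserve_request([0.1, 0.2, 0.3], [1, 3])
print(payload)
```

The actual `scripts/predict_resnet_http.py` additionally handles image preprocessing and response parsing; this only shows the wire format.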

## 🗺️ Roadmap
### Q4 2025
- [ ] Core PyTorch Framework with Inference and ONNX Export
    - [x] Classification
    - [ ] Image Embeddings
    - [ ] Language Embeddings
- [ ] TensorRT Export
    - [x] FP32/FP16 export
    - [ ] Verification
- [ ] Model Serving
    - [x] NVIDIA Triton Server with KServe API
    - [ ] Model Zoo for classification and embeddings
- [ ] Documentation of REST API
- [ ] Basic CI/CD
    - [ ] Inference server to container registry
    - [ ] Unit tests with pytest and coverage

### Q1 2026
- [ ] Model Training
    - [ ] Panoptic Segmentation
    - [ ] CLIP-style training for image and language embeddings
- [ ] Multi-Spectral / Hyperspectral support
    - [ ] I/O for various formats (GeoTIFF, HDF5, NetCDF)
    - [ ] Data Augmentation for multi-spectral data
    - [ ] Models
- [ ] Documentation for Python API

### Q2 2026
- [ ] Model Quantization
    - [ ] INT8 PTQ (Post-Training Quantization) with calibration

### Small Projects
- PostGIS sampler for PyTorch DataLoader

## 👤 Maintainers
- Lionel Peer (lionel.peer@gmail.com)
