Metadata-Version: 2.4
Name: mobilesam_lite
Version: 0.1.2
Summary: Unofficial MobileSAM and MobileSAMv2 package for lightweight Segment Anything (and segment-everything) inference.
Author: bill2239
License: Apache-2.0
Project-URL: Homepage, https://github.com/bill2239/mobilesam_lite
Project-URL: Repository, https://github.com/bill2239/mobilesam_lite
Project-URL: Issues, https://github.com/bill2239/mobilesam_lite/issues
Keywords: segmentation,sam,segment-anything,computer-vision,pytorch
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Programming Language :: Python :: 3.11
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.8
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: matplotlib>=3.3
Requires-Dist: numpy>=1.23
Requires-Dist: opencv-python>=4.6
Requires-Dist: Pillow>=7.1.2
Requires-Dist: psutil>=5.9
Requires-Dist: PyYAML>=5.3.1
Requires-Dist: requests>=2.23
Requires-Dist: scipy>=1.4.1
Requires-Dist: torch>=2.1
Requires-Dist: torchvision>=0.16
Requires-Dist: timm>=0.9.12
Requires-Dist: tqdm>=4.64
Provides-Extra: dev
Requires-Dist: build>=1.2.2; extra == "dev"
Requires-Dist: twine>=5.1.1; extra == "dev"
Dynamic: license-file

# MobileSAM_lite 

An unofficial Python package for the MobileSAM and MobileSAMv2 runtimes that adds support for lighter encoder models not available in the original implementation.

This package vendors the runtime code needed for inference:

- `mobilesamv2`
- `tinyvit`
- `efficientvit`
- `ultralytics` under `mobilesam_lite/_vendor/ultralytics`

It intentionally does not bundle model checkpoints. Download weights separately and pass the checkpoint path at runtime.

The optional `mobilesamv2.promt_mobilesamv2` module now resolves its Ultralytics dependency from the vendored package in `mobilesam_lite._vendor.ultralytics`.

## Install locally

```bash
pip install -e .
```

## Install from PyPI

```bash
pip install mobilesam-lite
```


## Example

```python
import torch

from mobilesam_lite.mobile_sam import SamPredictor, sam_model_registry

# Build MobileSAM with the TinyViT ("vit_t") image encoder from a local checkpoint.
model = sam_model_registry["vit_t"]("./weight/mobile_sam.pt")

# Run on GPU when available.
device = "cuda" if torch.cuda.is_available() else "cpu"
model.to(device)
model.eval()

predictor = SamPredictor(model)
```
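
Since the package vendors the MobileSAM runtime, the predictor should keep the upstream SAM `SamPredictor` interface, so prompting works the usual way. A minimal sketch of a single point prompt, assuming `set_image` and `predict` behave as in the official MobileSAM predictor (the image path and point coordinates are illustrative):

```python
import cv2
import numpy as np

# Load an image and convert BGR -> RGB, since set_image expects RGB (path is illustrative).
image = cv2.cvtColor(cv2.imread("example.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground point near the object of interest (label 1 = foreground, 0 = background).
point_coords = np.array([[320, 240]])
point_labels = np.array([1])

masks, scores, logits = predictor.predict(
    point_coords=point_coords,
    point_labels=point_labels,
    multimask_output=True,
)
print(masks.shape, scores)  # (3, H, W) boolean masks with their predicted quality scores
```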

## Verify an installed wheel

After installing the wheel into a clean environment, run:

```bash
python example_inference_mobilesam.py --checkpoint /path/to/mobile_sam.pt
```

You can also provide a real image:

```bash
python example_inference_mobilesam.py --checkpoint /path/to/mobile_sam.pt --image /path/to/image.jpg
```

The script prints the installed distribution version, the imported package path, and the output tensor shapes from one prediction call.
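
If you only need the version and package-path checks, the same information can be pulled from a Python shell without the script (a minimal sketch; the distribution name matches the package metadata above):

```python
from importlib.metadata import version

import mobilesam_lite

# Installed distribution version and the location the package was imported from.
print(version("mobilesam_lite"))
print(mobilesam_lite.__file__)
```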

For the MobileSAMv2 decoder path, use:

```bash
python example_inference_mobilesamv2.py \
  --checkpoint /path/to/mobile_sam.pt \
  --prompt-decoder-checkpoint /path/to/Prompt_guided_Mask_Decoder.pt \
  --object-aware-model-checkpoint /path/to/ObjectAwareModel.pt \
  --image /path/to/image.jpg \
  --output-dir wheel_verify_mobilesamv2_output
```

This script runs the MobileSAMv2 segment-everything ("seg-every") pipeline, combining `ObjectAwareModel` box proposals with the prompt-guided mask decoder.

Inputs:

- `--checkpoint`: image encoder checkpoint
- `--prompt-decoder-checkpoint`: `Prompt_guided_Mask_Decoder.pt`
- `--object-aware-model-checkpoint`: `ObjectAwareModel.pt`
- `--image`: optional input image path. If omitted, the script uses a synthetic test image.
- `--output-dir`: directory for generated visualizations
- Optional tuning args: `--encoder-type`, `--imgsz`, `--iou`, `--conf`, `--retina`, `--decoder-batch-size`, `--min-box-area-ratio`, `--max-box-area-ratio`
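
The two area-ratio flags drop implausibly small or large box proposals before they reach the decoder. A minimal sketch of that kind of filter, assuming boxes in `xyxy` pixel coordinates (variable names and defaults are illustrative, not the script's exact logic):

```python
import numpy as np

def filter_boxes_by_area_ratio(boxes, image_shape, min_ratio=0.0005, max_ratio=0.9):
    """Keep boxes whose area falls within [min_ratio, max_ratio] of the image area."""
    h, w = image_shape[:2]
    widths = boxes[:, 2] - boxes[:, 0]
    heights = boxes[:, 3] - boxes[:, 1]
    ratios = (widths * heights) / float(h * w)
    keep = (ratios >= min_ratio) & (ratios <= max_ratio)
    return boxes[keep]
```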

Outputs:

- Console summary with device, input image shape, detected box count, filtered box count, mask tensor shape, and saved output path
- `boxes.png`: detected boxes after filtering
- `mask_union.png`: binary union of all predicted masks
- `mask_union_overlay.png`: union mask blended over the input image
- `mask_overlay.png`: per-mask color overlay for the seg-every result
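
The mask images listed above are simple to rebuild from the raw predictions if you want to post-process them yourself. A minimal sketch, assuming `masks` is a boolean array of shape `(N, H, W)` and `image` is the RGB input as an `(H, W, 3)` uint8 array (file names mirror the script's outputs, but the code is illustrative):

```python
import numpy as np
from PIL import Image

def save_union_and_overlay(masks, image, out_dir="."):
    # Binary union of all predicted masks.
    union = masks.any(axis=0)
    Image.fromarray((union * 255).astype(np.uint8)).save(f"{out_dir}/mask_union.png")

    # Blend the union over the input image (green tint; the 0.5 alpha is illustrative).
    overlay = image.copy()
    overlay[union] = (0.5 * overlay[union] + 0.5 * np.array([0, 255, 0])).astype(np.uint8)
    Image.fromarray(overlay).save(f"{out_dir}/mask_union_overlay.png")
```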

Example assets for the MobileSAMv2 seg-every flow:

Input image:

![MobileSAMv2 seg-every input](asset/input.png)

Output overlay:

![MobileSAMv2 seg-every output](asset/mask_overlay.png)

## Reference: Official MobileSAM repository 
https://github.com/chaoningzhang/mobilesam

If you find this repo useful, please consider clicking the button below to donate and support my work!
[![Buy Me A Coffee](https://www.buymeacoffee.com/assets/img/custom_images/yellow_img.png)](https://buymeacoffee.com/bill2239)
