Metadata-Version: 2.4
Name: image-matching-models
Version: 1.0.0
Summary: Easily test and apply pairwise image matching models
Author-email: Alex Stoken <alex.stoken@gmail.com>, Gabriele Berton <berton.gabri@gmail.com>, Gabriele Trivigno <gabriele.trivigno@polito.it>
Maintainer-email: Alex Stoken <alex.stoken@gmail.com>, Gabriele Berton <berton.gabri@gmail.com>
License: BSD 3-Clause License
        
        Copyright (c) 2024, Alex Stoken, Gabriele Berton
        
        Redistribution and use in source and binary forms, with or without
        modification, are permitted provided that the following conditions are met:
        
        1. Redistributions of source code must retain the above copyright notice, this
           list of conditions and the following disclaimer.
        
        2. Redistributions in binary form must reproduce the above copyright notice,
           this list of conditions and the following disclaimer in the documentation
           and/or other materials provided with the distribution.
        
        3. Neither the name of the copyright holder nor the names of its
           contributors may be used to endorse or promote products derived from
           this software without specific prior written permission.
        
        THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
        AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
        IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
        DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
        FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
        DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
        SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
        CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
        OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
        OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
        
Project-URL: Homepage, https://github.com/gmberton/image-matching-models
Project-URL: Repository, https://github.com/gmberton/image-matching-models
Keywords: image matching
Classifier: Development Status :: 4 - Beta
Classifier: Programming Language :: Python
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: torch
Requires-Dist: torchvision
Requires-Dist: opencv-python
Requires-Dist: matplotlib
Requires-Dist: kornia
Requires-Dist: einops
Requires-Dist: transforms3d
Requires-Dist: kornia_moons
Requires-Dist: yacs
Requires-Dist: gdown>=5.1.0
Requires-Dist: huggingface_hub
Requires-Dist: safetensors
Requires-Dist: tables
Requires-Dist: imageio
Requires-Dist: vispy
Requires-Dist: pyglet==1.5.28
Requires-Dist: tensorboard
Requires-Dist: scipy
Requires-Dist: trimesh
Requires-Dist: e2cnn
Requires-Dist: scikit-learn
Requires-Dist: scikit-image
Requires-Dist: tqdm
Requires-Dist: py3_wget
Requires-Dist: roma
Requires-Dist: loguru
Requires-Dist: timm
Requires-Dist: omegaconf
Requires-Dist: poselib
Requires-Dist: lightning==2.3.3
Requires-Dist: flow_vis
Requires-Dist: uniception==0.1.1
Requires-Dist: h5py
Provides-Extra: omniglue
Requires-Dist: tensorflow; extra == "omniglue"
Provides-Extra: sphereglue
Requires-Dist: torch-geometric; extra == "sphereglue"
Requires-Dist: torch-cluster; extra == "sphereglue"
Provides-Extra: all
Requires-Dist: image-matching-models[omniglue,sphereglue]; extra == "all"
Dynamic: license-file

# Image Matching Models (IMM)

A unified API for quickly and easily trying 50+ (and growing!) image matching models.

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/gmberton/image-matching-models/blob/main/demo.ipynb)

Jump to: [Install](#install) | [Use](#use) | [Models](#available-models) | [Add a Model / Contributing](#adding-a-new-method) | [Acknowledgements](#acknowledgements) | [Cite](#cite)

### Matching Examples
Compare matching models across various scenes. For example, we show `SIFT-LightGlue` and `LoFTR` matches on pairs: 
<p>(1) outdoor, (2) indoor, (3) satellite remote sensing, (4) paintings, (5) a false positive, and (6) spherical. </p>
<details open><summary>
SIFT-LightGlue
</summary>
<p float="left">
  <img src="assets/example_sift-lightglue/output_3_matches.jpg" width="195" />
  <img src="assets/example_sift-lightglue/output_2_matches.jpg" width="195" />
  <img src="assets/example_sift-lightglue/output_4_matches.jpg" width="195" />
  <img src="assets/example_sift-lightglue/output_1_matches.jpg" width="195" />
  <img src="assets/example_sift-lightglue/output_0_matches.jpg" width="195" />
    <img src="assets/example_sift-lightglue/output_5_matches.jpg" width="195" />

</p>
</details>

<details open><summary>
LoFTR
</summary>
<p float="left">
  <img src="assets/example_loftr/output_3_matches.jpg" width="195" />
  <img src="assets/example_loftr/output_2_matches.jpg" width="195" />
  <img src="assets/example_loftr/output_4_matches.jpg" width="195" />
  <img src="assets/example_loftr/output_1_matches.jpg" width="195" />
  <img src="assets/example_loftr/output_0_matches.jpg" width="195" />
  <img src="assets/example_loftr/output_5_matches.jpg" width="195" />
</p>
</details>

### Extraction Examples
You can also extract keypoints and associated descriptors. 
<details open><summary>
SIFT and DeDoDe
</summary>
<p float="left">
  <img src="assets/example_sift-lightglue/output_8_kpts.jpg" width="195" />
  <img src="assets/example_dedode/output_8_kpts.jpg" width="195" />
  <img src="assets/example_sift-lightglue/output_0_kpts.jpg" width="195" />
  <img src="assets/example_dedode/output_0_kpts.jpg" width="195" />
</p>
</details>

## Install
IMM can be installed directly from PyPI using pip or uv (faster):
```bash
pip install image-matching-models
# or
uv pip install image-matching-models
```

or, for development, clone this repo recursively and install the package:
```bash
git clone --recursive https://github.com/gmberton/image-matching-models
cd image-matching-models

pip install .
# or, if you want an editable install for dev work
pip install -e . 
```

Some models require optional dependencies that are not installed by default, such as torch-geometric (required by SphereGlue) and tensorflow (required by OmniGlue). To install them, use
```bash
pip install ".[all]"
# or
uv pip install ".[all]"
```


## Use

You can use any of the 50+ matchers as shown below. IMM downloads all model weights automatically.

### Python API
```python
from matching import get_matcher
from matching.viz import plot_matches, plot_kpts

# Choose any of the 50+ matchers listed below
matcher = get_matcher("superpoint-lightglue", device="cuda")
img_size = 512  # optional

img0 = matcher.load_image("assets/example_pairs/outdoor/montmartre_close.jpg", resize=img_size)
img1 = matcher.load_image("assets/example_pairs/outdoor/montmartre_far.jpg", resize=img_size)

result = matcher(img0, img1)
# result.keys() = ["num_inliers", "H", "all_kpts0", "all_kpts1", "all_desc0", "all_desc1", "matched_kpts0", "matched_kpts1", "inlier_kpts0", "inlier_kpts1"]

# This will plot visualizations for matches as shown in the figures above
plot_matches(img0, img1, result, save_path="plot_matches.png")

# Or you can extract and visualize keypoints as easily as
result = matcher.extract(img0)
# result.keys() = ["all_kpts0", "all_desc0"]
plot_kpts(img0, result, save_path="plot_kpts.png")
```
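The returned homography `result["H"]` maps pixel coordinates in `img0` to the corresponding coordinates in `img1`, which is also how `num_inliers` is determined. Below is a minimal, self-contained sketch of applying a homography to keypoints with NumPy; the homography and keypoints here are synthetic stand-ins for `result["H"]` and `result["inlier_kpts0"]`/`result["inlier_kpts1"]` from a real matcher run.

```python
import numpy as np

def reproject(H, pts):
    """Apply a 3x3 homography to an Nx2 array of pixel coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    proj = pts_h @ H.T
    return proj[:, :2] / proj[:, 2:3]  # back to Cartesian

# Synthetic stand-ins for result["H"] and the inlier keypoints
H = np.array([[1.0, 0.0, 10.0],
              [0.0, 1.0, -5.0],
              [0.0, 0.0, 1.0]])  # pure translation: +10 px in x, -5 px in y
kpts0 = np.array([[100.0, 200.0], [50.0, 60.0]])
kpts1 = reproject(H, kpts0)  # where those points land in the second image

# Reprojection error of each correspondence under H (zero by construction here;
# with real matcher output this measures how well each inlier fits H)
errors = np.linalg.norm(reproject(H, kpts0) - kpts1, axis=1)
print(kpts1)   # → [[110. 195.]
               #    [ 60.  55.]]
print(errors)  # → [0. 0.]
```

With real matcher output, thresholding this reprojection error is the standard way a RANSAC-style estimator separates inliers from outliers.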

### Command Line Interface / Standalone Scripts
You can also run matching or extraction as standalone scripts to get the same results as above.
#### Matching:
```bash
# if you cloned this repo, imm_match.py is available, else see CLI below
python imm_match.py --matcher superpoint-lightglue --out_dir outputs/superpoint-lightglue --input assets/example_pairs/outdoor/montmartre_close.jpg assets/example_pairs/outdoor/montmartre_far.jpg
# or
uv run imm_match.py --matcher superpoint-lightglue --out_dir outputs/superpoint-lightglue --input assets/example_pairs/outdoor/montmartre_close.jpg assets/example_pairs/outdoor/montmartre_far.jpg
```
From any location where a Python environment with IMM installed is active, you can also run
```bash
# for PyPi install, use CLI entry point
imm-match --matcher superpoint-lightglue --out_dir outputs/superpoint-lightglue --input path/to/img0 --input path/to/img1
```
#### Keypoints extraction:
```bash
# if you cloned this repo, imm_extract.py is available, else see CLI below
python imm_extract.py --matcher superpoint-lightglue --out_dir outputs/superpoint-lightglue --input assets/example_pairs/outdoor/montmartre_close.jpg
# or
uv run imm_extract.py --matcher superpoint-lightglue --out_dir outputs/superpoint-lightglue --input assets/example_pairs/outdoor/montmartre_close.jpg
```
From any location where a Python environment with IMM installed is active, you can also run

```bash
# for PyPi install, use CLI entry point
imm-extract --matcher superpoint-lightglue --out_dir outputs/superpoint-lightglue --input path/to/img0
```

These scripts accept as input single images, folders containing multiple images (or multiple image pairs), or text files listing pairs of image paths.
To see all available parameters, run
```bash
python imm_match.py -h
# or
python imm_extract.py -h
```


## Available Models
We support the following methods:

**Dense**: ```roma, tiny-roma, duster, master, minima-roma, ufm```

**Semi-dense**: ```loftr, eloftr, se2loftr, xoftr, minima-loftr, aspanformer, matchformer, xfeat-star, xfeat-star-steerers[-perm/-learned], edm, rdd-star, topicfm[-plus]```

**Sparse**: ```[sift, superpoint, disk, aliked, dedode, doghardnet, gim, xfeat]-lightglue, dedode, steerers, affine-steerers, xfeat-steerers[-perm/learned], dedode-kornia, [sift, orb, doghardnet]-nn, patch2pix, superglue, r2d2, d2net,  gim-dkm, xfeat, omniglue, [dedode, xfeat, aliked]-subpx, [sift, superpoint]-sphereglue, minima-superpoint-lightglue, liftfeat, rdd-[sparse,lightglue, aliked], ripe, lisrd```

See [Model Details](docs/model_details.md) for the runtime, supported devices, and source of each model.

## Adding a new method
See [CONTRIBUTING.md](CONTRIBUTING.md) for details. We follow the [first principle of PyTorch](https://docs.pytorch.org/docs/stable/community/design.html#design-principles): Usability over Performance.

## Acknowledgements
Special thanks to the authors of all models included in this repo (links in [Model Details](docs/model_details.md)), and to the authors of other libraries we wrap, such as the [Image Matching Toolbox](https://github.com/GrumpyZhou/image-matching-toolbox/tree/main) and [Kornia](https://github.com/kornia/kornia).

## Cite
This repo was created as part of the EarthMatch paper. Please cite EarthMatch if this repo is helpful to you!

```
@InProceedings{Berton_2024_EarthMatch,
    author    = {Berton, Gabriele and Goletto, Gabriele and Trivigno, Gabriele and Stoken, Alex and Caputo, Barbara and Masone, Carlo},
    title     = {EarthMatch: Iterative Coregistration for Fine-grained Localization of Astronaut Photography},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
}
```
