Metadata-Version: 2.3
Name: flat-bug
Version: 1.1.1
Summary: Universal Arthropod Localization and Instance Segmentation
Keywords: deep learning,object detection,instance segmentation,arthropods
Author: Asger Svenning, Quentin Geissmann
Author-email: Asger Svenning <asgersvenning@ecos.au.dk>
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Programming Language :: Python :: 3.12
Classifier: Programming Language :: Python :: 3.13
Classifier: Programming Language :: Python :: 3.14
Classifier: Operating System :: OS Independent
Requires-Dist: torch>=2.2.0
Requires-Dist: torchvision>=0.17.0
Requires-Dist: ultralytics>=8.2.16,<=8.3.124
Requires-Dist: shapely>=2.0.2
Requires-Dist: scikit-optimize>=0.10.1
Requires-Dist: scipy>=1.14.1
Requires-Dist: boto3>=1.40 ; extra == 'cloud-datasets'
Requires-Dist: cvat-sdk>=2.47 ; extra == 'cloud-datasets'
Requires-Dist: pyremotedata>=0.0.16 ; extra == 'erda'
Requires-Python: >=3.11
Project-URL: Homepage, https://github.com/darsa-group/flat-bug
Project-URL: Bug Tracker, https://github.com/darsa-group/flat-bug/issues
Provides-Extra: cloud-datasets
Provides-Extra: erda
Description-Content-Type: text/markdown

# <code> flatbug </code>

<div align="center">
    <h3 class="heading-element" dir="auto"><ins>A General Method for Detection and Segmentation of Terrestrial Arthropods in Images</ins></h3>
    <img src="prediction.jpg" style="width: 75%;">
</div>

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/darsa-group/flat-bug/blob/master/docs/flat-bug.ipynb)
[![](https://img.shields.io/badge/Methods%20in%20Ecology%20and%20Evolution-10.1111/2041%2D%2D210x.70249-B52838?style=flat&logo=data:image/webp;base64,UklGRvgFAABXRUJQVlA4TOsFAAAvt8AtELchEEhi3d9hDYFAkj/nHgsJEib7v8iwbRtHt/+u970aihJ/VwNCFID4AW4IAD4EYgI0frz4EDQOfAA/CmBFIAoCCCBEAEQgxA0BCkIIUQraAoAmkqRm3N3dZ3rcuuf/f+22O0gI9wohov+QIElu2/RBWbROdEkAE6D3n7YYfb4+XJwc7GysLM5BeF/Z2MmTi4fXz9GfWRHvfro9C4zI7VN/N2fS7+HpBgiF57Cb8f99PVsD4Uh/HTXv9+kAKkV2b3n650OoGv7c6OQf5zNQPfq5N2c83AUl4dmb+jTdroai6LfezLXLhVAWPZu44lfzoDDmU/2V8f0yKA2/76q9bIHi6C96fR6D8uifSqe6mwP14dk1Vm/70UR013dvNBPZVfk6gobiqGv6+lqHpiJf1DztGpoL7yr8nESD0VPD99ceNBl7Xt3bJjQam72y4RI0Gzms6hGajseK7qDxyGpuoPm4qXeyudyBCZF1/5qtf2wIZoT0i/YlO6RsxeebYEhsumQTbg9Mib2UazCfgDHRx1Kuw5xwqaUqMCheZJZh1y2SXeJJR2BSHHW5ZsWgEKw8CumUs0DShCBko7Sh8REEgechopSkGuHc+/ZJAs+DC01SjT7mNj5pAs+DCc0xKBTe9TmnCkTo4p8cx4AovEVyHoGliJlISZXorJqOIlAhZip9EoW13njLOn1MdR+qhEbh1C10y4og80UT/6W5CppBRaFQJO1yvoBQv0lPVigKzTvFJdAh9YiQIKdcXyTlcoGO1N7gKhd1uuNuQxCwIUWb6HidsRpyN0BIaNXHlPVLjMkqiSC3Q5Wtf9r9R6TekBU6ObqYIin40KJNeNm5MoMIzc7Le8TNcKGfM0GhUh+VPIOlwksOpQVbaf7o5KU6OjgG+IvImM5NqfBpT2QREUQhIiLU6tMOxIQVcnpqIGOt7KLVG3LrbRTE1MSvbCCGup5MpddJZ1wgDkkbdPJJa2Fu5OTBBWFw9MzMocXyn1OL5T8bFvOJWsPkGGW+2yz/8sFmPvFMo5+4jeFvVyrmhda/tKzxRIBUs6NJo++NKv9yVgLe3EDyICBlGaSs0QSl5X9TSuBu1aMY8DdeltKnzlkBMfoUgTfk0TxwA+a2tGmEBIV9vlrt1VkggtaeJ+VBzRIDETTELwgBfqEbYb7QReETKKafCIuSyfSKSClr1g+sljtCkMyrClY+wnzDGIXqhc9XDAIyVhYl6o3WQISIPteO6beihYdVgi2MMZDkVvPe1mSa9BWrrWzYIyJE+I6gwvxRaDIFdNsKu32R6iCzR7M8EYZ/rw3a1S/aMJ0uA5aXiP5QZbmWm0fF5VrC+hTWXif+ag6Q8vqpZL1NOct6622EfY6EBL6aAp197CyDlDWevrTR36yO9ZL8LIMy+9D0peVfbofJkXavv7Z7e4Hd22fs3h5m9vZHu7f32r193e79GRj7j9CqZnrdS69aSXsSo3V1vf1HOPvrkJpCjLZOg5yzfxSz6QloHu15FdsfLaLm8WaMJnJVI4H9/yDCIkna35K56iCA/QLE2UD4Tql/u8vs3xoRTNEiL+1PzFd8ozUV0q59coH9t8Mozt1fHnnnOHTGiIu6z+4jgeMTIkKrqOocOx6EXvSgFB3Hg+Qup1kkrfh/VMrxfm04RZuofZn48WWsYoo+JhzPRyoW6aTjJzlXDNGddLwqp9SaTKWkHR/MKrpEzdO8yxyPLWE6j3Yk9fh3orCEU3udvucovlzTXKx/BxmDlvSxWH8aAZWXD/V54fRfQloeN0Tn9RdjGf/kddJYa71NG5LbH1Kd9WRt6GNu/1NV1k
s2wgW68zQq0u7+1czuz87s/gPN7q/R7v4xze6P1Oz+X83ub9fs/o3N7k/a7P67ze4v3ez+6c0eD8Ds8RfMHu/C7PFFzB7Pxezxc8wer8jq8aHMHo/L7PHPrB5vzuzx/aweT9Hq8SvNHi/U6vFZrR4P1+rxh60e79no8bWtHs/c6PHjjR6v3+bfR/A//3sUzP79DwkA&logoColor=white&labelColor=gray)](https://doi.org/10.1111/2041-210x.70249)
[![](https://img.shields.io/badge/Zenodo-10.5281/zenodo.14761446-0377cd?style=flat&logo=doi&logoColor=white&labelColor=gray)](https://doi.org/10.5281/zenodo.14761446)

[![PyPI version](https://img.shields.io/pypi/v/flat-bug.svg)](https://pypi.org/project/flat-bug/)
[![Python Versions](https://img.shields.io/pypi/pyversions/flat-bug.svg)](https://pypi.org/project/flat-bug/)
[![CI Status](https://github.com/darsa-group/flat-bug/actions/workflows/ci.yml/badge.svg)](https://github.com/darsa-group/flat-bug/actions/workflows/ci.yml)
[![Code style: ruff](https://img.shields.io/badge/code%20style-ruff-000000.svg)](https://github.com/astral-sh/ruff)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)

<div align="center">
    <table><tr>
        <td><strong> Find and cite the <a href="https://doi.org/10.1111/2041-210x.70249">flatbug paper in Methods in Ecology and Evolution</a></strong></td>
        <td><strong> Send us new data through our <a href="https://forms.gle/hQe2dzLs4tHcCarEA">data contribution form</a>. </strong></td>
    </tr></table>
</div>

---

`flatbug` is two things: a high-performance pyramid-tiling inference wrapper for [`YOLOv8`](https://github.com/ultralytics/ultralytics), and a hybrid instance segmentation dataset of terrestrial arthropods accompanied by a matching training schedule for `YOLOv8` segmentation models, built on top of the original [`YOLOv8` training schedule](https://docs.ultralytics.com/modes/train/#why-choose-ultralytics-yolo-for-training).

The goal of `flatbug` is to provide a single unified model for detection and segmentation of all terrestrial arthropods in arbitrarily large images, fine-tuned in particular for top-down images and scans, hence the name `"flat"bug`.
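For intuition, pyramid tiling can be sketched as enumerating overlapping fixed-size tiles over a scale pyramid of the image, so that both small and large arthropods fit within a single tile at some scale. The sketch below is purely illustrative and is not the `flatbug` implementation; the tile size, overlap, and scale factor are hypothetical defaults:

```python
# Illustrative sketch of pyramid tiling (NOT the flatbug implementation).
# Enumerates (scale, x, y) tile origins at successively coarser scales
# until the whole (downscaled) image fits in a single tile.
from typing import List, Tuple

def pyramid_tiles(
    width: int,
    height: int,
    tile: int = 1024,        # hypothetical tile side length in pixels
    overlap: float = 0.25,   # hypothetical fractional overlap between tiles
    scale_factor: float = 2.0,
) -> List[Tuple[float, int, int]]:
    scale = 1.0
    tiles = []
    while True:
        w, h = int(width / scale), int(height / scale)
        step = max(1, int(tile * (1 - overlap)))
        xs = list(range(0, max(w - tile, 0) + 1, step))
        ys = list(range(0, max(h - tile, 0) + 1, step))
        # Make sure the last row/column of tiles reaches the image edge
        if xs[-1] + tile < w:
            xs.append(w - tile)
        if ys[-1] + tile < h:
            ys.append(h - tile)
        for y in ys:
            for x in xs:
                tiles.append((scale, x, y))
        if w <= tile and h <= tile:
            break  # top of the pyramid: one tile covers the image
        scale *= scale_factor
    return tiles
```

Running each tile through the detector and merging the per-tile predictions (with de-duplication in the overlap regions) is what lets a fixed-input-size model handle arbitrarily large images.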

---

## Installation

We recommend using `uv` ([*installation*](https://docs.astral.sh/uv/getting-started/installation/)):

```bash
# Easy-install
uv pip install flat-bug --torch-backend=auto
# Add to a project permanently (recommended)
uv add flat-bug
```

> [!TIP]
> If PyTorch was installed without CUDA support, try:
> ```bash
> uv pip install torch torchvision --torch-backend=auto --reinstall
> ```
> More details:
> https://docs.astral.sh/uv/guides/integration/pytorch/#the-uv-pip-interface

or *(not recommended)*:

```bash
pip install flat-bug
```

### Source/development

A development version can be installed from source by cloning this repository:

#### Clone the repository

```bash
git clone https://github.com/darsa-group/flat-bug.git
cd flat-bug
```

#### Install `flatbug`

```bash
uv sync --all-extras --all-groups --upgrade 
# (optional but recommended)
uv pip install torch torchvision --torch-backend=auto --reinstall
```

or *(not recommended)*:

```bash
pip install -e .
```

> [!WARNING]
> If you do decide to install with `pip`, then as with other packages built on `PyTorch`, it is best to ensure that `torch` is installed separately. See [pytorch.org/get-started/locally](https://pytorch.org/get-started/locally) for details. We recommend using `torch>=2.3`.

---

## CLI Usage

We provide a number of [CLI scripts](https://darsa.info/flat-bug/cli.html) with `flatbug`. The main entry point is `fb_predict`, which runs inference on images or videos:

```bash
[uv run] fb_predict -i <DIR_WITH_IMGS> -o <OUTPUT_DIR> [-w <WEIGHT_PATH>] ...
```

## Tutorials

We provide a number of tutorials on general and advanced usage, training, deployment and hyperparameters of `flatbug` in [examples/tutorials](examples/tutorials) or with Google Colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/darsa-group/flat-bug/blob/master/docs/flat-bug.ipynb).

## Documentation

Find our documentation at [https://darsa.info/flat-bug/](https://darsa.info/flat-bug/).

---

## CUDA Issues

Working with cross-platform PyTorch code can be confusing, so if you get stuck on CUDA errors, here are some ways to resolve them.

### `uv` and `pip`

If you installed `flat-bug` via a package manager but find that GPU acceleration is not working, your environment likely downloaded the default PyPI wheels, which may not match your system's NVIDIA drivers.

**If you are using `uv`**, the easiest fix is to force a re-resolution of the PyTorch backend:

```bash
# Automatically detect hardware and reinstall PyTorch
uv pip install torch torchvision --torch-backend=auto --reinstall

# OR manually force a specific CUDA version (e.g., CUDA 11.8)
uv pip install torch torchvision --torch-backend=cu118 --reinstall
```

**If you are using standard `pip`**, you must manually point to the PyTorch index that matches your system:

```bash
# Uninstall the broken versions
pip uninstall torch torchvision

# Reinstall from the index matching your CUDA version
# (e.g. cu118 for CUDA 11.8, cu121 for CUDA 12.1)
pip install torch torchvision --index-url https://download.pytorch.org/whl/cu118
```

**Verification:**

```bash
[uv run] python -c "import torch; print(f'CUDA Available: {torch.cuda.is_available()}')"
```
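For a fuller picture than the one-liner above, a small diagnostic script (plain Python with no `flatbug`-specific calls; the function name is our own) can report which PyTorch build you ended up with:

```python
# Hedged diagnostic sketch: summarize the local PyTorch/CUDA state
# without assuming torch is installed at all.
import importlib.util

def cuda_report() -> dict:
    if importlib.util.find_spec("torch") is None:
        return {"installed": False}
    import torch
    report = {
        "installed": True,
        "torch_version": torch.__version__,
        "cuda_available": torch.cuda.is_available(),
        # torch.version.cuda is None for CPU-only wheels
        "built_for_cuda": torch.version.cuda,
    }
    if report["cuda_available"]:
        report["device"] = torch.cuda.get_device_name(0)
    return report

if __name__ == "__main__":
    for key, value in cuda_report().items():
        print(f"{key}: {value}")
```

If `built_for_cuda` is `None`, you have a CPU-only wheel and should reinstall `torch` using one of the commands above.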


### Source

Rebuild the environment and lockfile from scratch:

```bash
# cd ~/flat-bug

# 1. Purge old state
rm uv.lock
rm -rf .venv

# 2. Regenerate the cross-platform lockfile
uv lock

# 3. Create your local environment
uv sync --all-extras --all-groups

# 4. Patch your local environment with your specific hardware backend
uv pip install torch torchvision --torch-backend=auto --reinstall
```

<!-- fixme: Remember to add this later!
### Archive

## Models

## Data

---

## Contributions

### Code

### Data

-->