Metadata-Version: 2.4
Name: acmenra-cv
Version: 0.1.5.5
Summary: Production-ready computer vision utilities for multi-object tracking
Author-email: "acmenra.studio" <hello@acmenra.ru>
Project-URL: Homepage, https://acmenra.studio
Project-URL: Documentation, https://github.com/Acmenra/acmenra-cv#readme
Project-URL: Repository, https://github.com/Acmenra/acmenra-cv
Keywords: acmenra-cv
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.11
Classifier: Topic :: Scientific/Engineering :: Image Recognition
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Requires-Python: >=3.11
Description-Content-Type: text/markdown
Requires-Dist: numpy>=1.21.0
Requires-Dist: opencv-python>=4.5.0
Requires-Dist: ultralytics>=8.0.0
Provides-Extra: dev
Requires-Dist: pytest>=7.0.0; extra == "dev"
Requires-Dist: black>=23.0.0; extra == "dev"
Requires-Dist: mypy>=1.0.0; extra == "dev"

# 🚀 acmenra-cv

> **Production-ready computer vision utilities for ADAS, multi-object tracking, and embedded vision systems**

[![PyPI](https://img.shields.io/pypi/v/acmenra-cv.svg)](https://pypi.org/project/acmenra-cv/)
[![Python](https://img.shields.io/pypi/pyversions/acmenra-cv.svg)](https://pypi.org/project/acmenra-cv/)
[![License](https://img.shields.io/pypi/l/acmenra-cv.svg)](LICENSE)

```bash
pip install acmenra-cv
```

---

## 📦 Overview

**acmenra-cv** is a high-performance, type-safe computer vision library engineered for **real-time applications on resource-constrained embedded systems** (Raspberry Pi 5, Jetson, etc.). Built following Clean Architecture principles, it provides three cohesive modules:

| Module | Purpose | Key Features |
|--------|---------|-------------|
| **`YOLOUtils`** | Spatial primitives for YOLO outputs | Normalized coordinates, strict validation, immutable transformations |
| **`tracker`** | Multi-object tracking with trajectories | Persistent IDs, configurable history, BoT-SORT integration |
| **`render`** | Type-safe visualization layer | Alpha-blended overlays, embedded optimizations, graceful degradation |

All components operate in **normalized coordinate space `[0.0, 1.0]`** by default, ensuring resolution independence across varying camera inputs and inference resolutions.
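Normalized space means a detection at `(0.5, 0.5)` is the frame center at any resolution; conversion to pixels happens only at render time. A minimal sketch of that conversion (the helper name is illustrative, not part of the library API):

```python
def to_pixels(x_norm: float, y_norm: float, width: int, height: int) -> tuple[int, int]:
    """Map normalized [0.0, 1.0] coordinates to integer pixel coordinates."""
    if not (0.0 <= x_norm <= 1.0 and 0.0 <= y_norm <= 1.0):
        raise ValueError("normalized coordinates must lie in [0.0, 1.0]")
    # Scale by (dimension - 1) so 1.0 maps to the last valid pixel index.
    return round(x_norm * (width - 1)), round(y_norm * (height - 1))
```

The same normalized box can therefore be drawn on a 640×480 preview and a 1920×1080 recording without recomputing the detection.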

---

## ✨ Key Features

### 🔹 Unified Architecture
- **Strict type & range validation** (`[0.0, 1.0]` with `safe()` clamping factory)
- **Seamless YOLO integration** (`boxes`, `masks.xyn`, `obb.xyxyxyxyn`)
- **Immutable geometric transformations** (`scale`, `translate`, `smooth`)
- **Full type safety** with IDE autocomplete and consistent API across all primitives

### 🔹 Embedded-Ready Performance
- **Zero-crash OpenCV integration** with `@validate_frame` decorator
- **Global `show=False` toggle** to bypass all rendering for headless/embedded deployments
- **Memory-efficient trajectory queues** with O(1) average calculation
- **Alpha-blended compositing** (`cv2.addWeighted`) and anti-aliased geometry (`LINE_AA`)

### 🔹 ADAS & Safety-Critical Design
- **Trajectory history management** for zone crossing and collision detection
- **Temporal metadata** (`TimedPoint`) for velocity/direction estimation
- **Configurable thresholds** (`conf`, `iou`, `max_length`) for dynamic adaptation
- **Graceful degradation** on invalid inputs — no exceptions, just safe fallbacks
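The zero-crash idea behind `@validate_frame` can be illustrated with a small decorator; this is a hypothetical re-implementation of the pattern, not the library's actual code:

```python
import functools

import numpy as np


def validate_frame(func):
    """Return the input unchanged instead of raising when the frame
    is not a valid 3-channel image (sketch of the documented pattern)."""
    @functools.wraps(func)
    def wrapper(frame, *args, **kwargs):
        if not isinstance(frame, np.ndarray) or frame.ndim != 3 or frame.shape[2] != 3:
            return frame  # graceful fallback: no exception, no drawing
        return func(frame, *args, **kwargs)
    return wrapper


@validate_frame
def draw_marker(frame, x, y):
    # Hypothetical drawing op: paint one pixel red (BGR order).
    frame = frame.copy()
    frame[y, x] = (0, 0, 255)
    return frame
```

On an embedded pipeline this turns a camera glitch into a skipped overlay rather than a crashed process.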

---

## 🧩 Module Documentation

### 🔷 [utils](#utils) — Spatial Primitives
> Validated geometric containers for YOLO detection outputs

<details><summary><strong>Classes</strong></summary>

#### `Point` — Validated 3D normalized coordinates
* `__init__()`: Initializes with X, Y, Z. Validates `float` type and `[0.0, 1.0]` range.
* `X`, `Y`, `Z`: Properties with strict type and range validation.
* `get_distance()`: Euclidean distance to another point (includes Z).
* `scale()`, `translate()`: Immutable transformations returning new instances.
* `safe()`: Class method factory with coordinate clamping — no exceptions.

#### `Box` — Axis-aligned 3D bounding box
* `__init__()`: Six boundaries (`left`, `right`, `top`, `bottom`, `front`, `back`).
* `center`, `bottom_center`: Computed properties for tracking.
* `width`, `height`, `depth`: Dimension properties.
* `get_area()`, `get_volume()`: Geometric calculations.
* `to_absolute_array()`: Converts to pixel corners for OpenCV.

#### `Polygon` — Segmentation mask container
* `from_yolo_xyn()`: Static factory from YOLO `masks.xyn`.
* `smooth()`: Vertex smoothing via moving average.
* `get_area()`: Shoelace formula for normalized area.
* `__getitem__()`: Supports slicing — returns new `Polygon`.

#### `Obb` — Oriented (rotated) bounding box
* `from_yolo_obb()`: Static factory from YOLO `obb.xyxyxyxyn`.
* `angle`: Cached rotation angle `(-90° to +90°)`.
* `width`, `height`: Average edge lengths for rotated rectangles.

#### `YOLOInstance` — Unified detection container
* Combines `id`, `class_id`, `category`, `conf`, `box`, `polygon`, `obb`.
* Strict validation on all properties.
* Designed for safe pipeline integration.

</details>
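To make the validate-or-clamp contract concrete, here is a minimal stand-alone re-implementation of the `Point` idea (the real class lives in the package; only the documented behavior is mirrored here, the body is a sketch):

```python
import math
from dataclasses import dataclass


@dataclass(frozen=True)
class MiniPoint:
    """Stand-in for the documented Point: immutable, normalized coordinates."""
    x: float
    y: float
    z: float = 0.0

    def __post_init__(self):
        # Strict constructor: wrong type or out-of-range value raises.
        for name in ("x", "y", "z"):
            value = getattr(self, name)
            if not isinstance(value, float) or not 0.0 <= value <= 1.0:
                raise ValueError(f"{name} must be a float in [0.0, 1.0], got {value!r}")

    @classmethod
    def safe(cls, x: float, y: float, z: float = 0.0) -> "MiniPoint":
        # Clamping factory: never raises, mirrors the documented safe() contract.
        def clamp(v: float) -> float:
            return min(1.0, max(0.0, float(v)))
        return cls(clamp(x), clamp(y), clamp(z))

    def get_distance(self, other: "MiniPoint") -> float:
        # Euclidean distance including the Z axis.
        return math.dist((self.x, self.y, self.z), (other.x, other.y, other.z))
```

The two entry points split the responsibility: the constructor is for trusted data where an error should surface, `safe()` is for raw sensor output where it should not.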

---

### 🔷 [tracker](#tracker) — Multi-Object Tracking
> Persistent IDs, trajectory management, ADAS integration

<details><summary><strong>Classes</strong></summary>

#### `Tracker` — Main tracking engine
* `__init__()`: Configurable with YOLO model, device, thresholds, `max_length`.
* `track()`: Main entry point — returns `List[TrackedObject]` with persistent IDs.
* `predict()`, `double_predict()`: Detection modes (single / panoramic).
* Properties: `model`, `device`, `conf`, `iou`, `max_length` — all validated.

#### `TrackedObject` — Single tracked entity
* `id`: Tracking identifier (`Optional[int]`).
* `instance`: Latest `YOLOInstance` detection data.
* `trajectory`: `TimedPointQueue` with historical positions.

#### `TimedPointQueue` — Fixed-length trajectory history
* `enqueue()`, `dequeue()`: FIFO with auto-eviction.
* `average_x/y/z`: O(1) incremental centroid calculation.
* `get_values()`: Deep-copy snapshot for safe external access.

#### `TimedPoint` — Time-stamped spatial point
* Extends `Point` with `timestamp: Optional[datetime]`.
* `to_point()`: Discards temporal metadata for geometry-only ops.

</details>
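The O(1) average claim rests on keeping a running sum instead of re-summing the queue on every read. A self-contained sketch of that idea (an illustration of the `TimedPointQueue` design, tracking only the x coordinate; not the library's code):

```python
from collections import deque


class MiniTrajectory:
    """Fixed-length FIFO with an O(1) running average, in the spirit
    of the documented TimedPointQueue (sketch, x coordinate only)."""

    def __init__(self, max_length: int):
        self._points = deque(maxlen=max_length)
        self._sum_x = 0.0

    def enqueue(self, x: float) -> None:
        # When full, subtract the value that deque.append() will auto-evict,
        # so the running sum stays correct without re-summing the queue.
        if len(self._points) == self._points.maxlen:
            self._sum_x -= self._points[0]
        self._points.append(x)
        self._sum_x += x

    @property
    def count(self) -> int:
        return len(self._points)

    @property
    def average_x(self) -> float:
        return self._sum_x / len(self._points) if self._points else 0.0
```

With per-frame updates on dozens of tracks, this incremental bookkeeping is what keeps trajectory statistics cheap on a Raspberry Pi class CPU.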

---

### 🔷 [render](#render) — Visualization Layer
> Type-safe drawing operations for embedded systems

<details><summary><strong>Classes</strong></summary>

#### `Drawer` — Main rendering engine
* `draw_instances()`: Renders multiple objects with alpha-blended overlays 🔥
* `draw_box()`, `draw_obb()`, `draw_polygon()`, `draw_trajectory()`: Individual element rendering.
* `draw_text()`: Absolute pixel coordinate text rendering.
* `@validate_frame` decorator on all public methods — zero-crash guarantee.

#### `Style` — Centralized visualization config
* `palette`: Unique RGB tuples with `[0, 255]` validation.
* `thickness`, `rounding`, `segment`, `smooth`, `alpha`: All range-validated.
* `show`: Global toggle — `False` bypasses all rendering for performance.

#### `Font` — OpenCV typography config
* `color`: 3-element RGB tuple validation.
* `font`: OpenCV font identifier `[0, 7]`.
* `font_scale`, `thickness`: Positive integer validation.

</details>
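Alpha-blended compositing is the weighted sum `cv2.addWeighted(overlay, alpha, frame, 1 - alpha, 0)` computes. A NumPy-only sketch of the same math (illustrative, not the library's renderer):

```python
import numpy as np


def blend(frame: np.ndarray, overlay: np.ndarray, alpha: float) -> np.ndarray:
    """Composite overlay onto frame: out = alpha * overlay + (1 - alpha) * frame,
    the same weighted sum cv2.addWeighted performs."""
    if not 0.0 <= alpha <= 1.0:
        raise ValueError("alpha must lie in [0.0, 1.0]")
    # Blend in float to avoid uint8 overflow, then clip back to pixel range.
    out = alpha * overlay.astype(np.float32) + (1.0 - alpha) * frame.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Drawing filled shapes on a copy and blending it back is what makes overlays translucent instead of opaque, at the cost of one extra full-frame pass.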

---

## 💡 Quick Start

```python
from acmenra_cv.tracker import Tracker
from acmenra_cv.render import Drawer, Style, Font
from ultralytics import YOLO

# 1. Initialize components
model = YOLO("yolov8n.pt")
style = Style(
    palette=[(255, 0, 0), (0, 255, 0), (0, 0, 255)],
    font=Font(color=(255, 255, 255)),
    show=True
)
# YourEnum is your project's category Enum (e.g. car, person, truck)
tracker = Tracker(model=model, category=YourEnum, max_length=50)
drawer = Drawer(style=style)

# 2. Process a frame
frame = ...  # Your BGR frame (numpy array)
tracked_objects = tracker.track(frame, enable_tracking=True)

# 3. Render results
output = drawer.draw_instances(
    frame=frame,
    tracked_objects=tracked_objects,
    is_box=True,
    is_trajectory=True
)

# 4. Use spatial data for business logic
for obj in tracked_objects:
    if obj.trajectory.count >= 5:
        speed = estimate_speed(obj.trajectory)  # Your logic
        if obj.instance.category == YourEnum.car and speed > threshold:
            trigger_alert(obj)
```

---

## 📋 Requirements

```txt
numpy>=1.21.0
opencv-python>=4.5.0
ultralytics>=8.0.0
```

Optional for development:
```txt
pytest>=7.0.0
black>=23.0.0
mypy>=1.0.0
```

---

## 🔐 License

© 2026 **acmenra.studio**. All rights reserved.

This software is proprietary and confidential. Unauthorized copying, distribution, or use is strictly prohibited.

For commercial licensing inquiries: [contact@acmenra.studio](mailto:contact@acmenra.studio)

---

## 🌐 Links

- **PyPI**: https://pypi.org/project/acmenra-cv/
- **Source**: https://github.com/acmenra/acmenra-cv
- **Documentation**: https://github.com/acmenra/acmenra-cv#readme
- **Issues**: https://github.com/acmenra/acmenra-cv/issues

---

> **acmenra.studio** — Building reliable vision systems for the edge.  
> *Every millisecond and frame buffer counts.* 🚀
