Metadata-Version: 2.4
Name: embedl-deploy
Version: 0.3.0
Summary: Python package to make AI models deployment-ready for any hardware.
Author-email: Embedl AB <support@embedl.com>
Project-URL: Homepage, https://www.embedl.com/
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3 :: Only
Classifier: License :: Other/Proprietary License
Classifier: Operating System :: POSIX :: Linux
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.10
Description-Content-Type: text/markdown
License-File: LICENSE
License-File: NOTICE
Requires-Dist: torch
Provides-Extra: tensorrt
Requires-Dist: tensorrt; extra == "tensorrt"
Dynamic: license-file

# embedl-deploy

Python package to make AI models deployment-ready for any hardware.

## Why embedl-deploy

PyTorch models are flexible, but edge hardware is not. Hardware toolchains may
fail on unsupported operators, or apply implicit transformations and fusions
during compilation and quantization, leading to deployment issues.

`embedl-deploy` eliminates these surprises by enforcing hardware and compiler
constraints directly in PyTorch, so what you build, train, and debug is what
actually runs on the device. It converts your models to be compatible with the
hardware target, ensuring correct quantization and compilation.

## Features

- **Hardware-accurate PyTorch Intermediate Representation (IR):**
  Build models using a hardware-aware PyTorch intermediate representation that
  mirrors the behavior of the compiled artifact, e.g., fused convolutions
  (see the fusion sketch after this list). Unsupported operators and
  compatibility issues are surfaced early and resolved explicitly before
  compilation, within PyTorch.

- **Quantization:**
  Supports post-training quantization (PTQ) and quantization-aware training
  (QAT). Fake quantization is applied in PyTorch with explicit placement of
  quantization operators. PTQ methods are included, and QAT can be applied
  directly to the transformed and quantized models with no additional
  dependencies required.

- **Guaranteed deployable artifacts:**
  Produce optimized compilation artifacts ready for deployment on the target
  device with predictable performance and accuracy.
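
For intuition, the kind of conv+BatchNorm fusion the IR mirrors can be
reproduced with stock PyTorch utilities. This is a sketch of the concept only,
using `torch.ao.quantization.fuse_modules` rather than embedl-deploy's own
patterns:

```python
import torch
from torch.ao.quantization import fuse_modules


class ConvBN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = torch.nn.Conv2d(3, 8, 3)
        self.bn = torch.nn.BatchNorm2d(8)

    def forward(self, x):
        return self.bn(self.conv(x))


m = ConvBN().eval()
fused = fuse_modules(m, [["conv", "bn"]])  # folds BN into the conv weights

x = torch.randn(1, 3, 16, 16)
# The fused module matches the original numerically (up to float rounding),
# which is what a hardware-accurate IR has to preserve for fused operators.
print((m(x) - fused(x)).abs().max())
```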


## Supported Backends

| Backend             | Status      |
|---------------------|-------------|
| NVIDIA TensorRT     | Supported   |

Contact us for other backends.

## Installation

```bash
pip install embedl-deploy
```
Note that if you use ONNX as an intermediate format, you may also need to
install `onnx` and `onnx-simplifier` to export the model and compile it with
TensorRT.
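
The package also declares a `tensorrt` extra (see the metadata above), so the
TensorRT dependency and the ONNX tooling can be pulled in together:

```bash
pip install "embedl-deploy[tensorrt]" onnx onnx-simplifier
```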

---

## Quick Start

```python
import torch
from embedl_deploy import transform
from embedl_deploy.quantize import quantize
from embedl_deploy.tensorrt import TENSORRT_PATTERNS
from torchvision.models import resnet18 as Model

# 1. Load a standard PyTorch model
model = Model().eval()
example_input = torch.randn(1, 3, 224, 224)

# 2. Transform — fuse and optimize for TensorRT in one call
res = transform(model, patterns=TENSORRT_PATTERNS)
print("Model\n", res.model.print_readable())
print("Matches", "\n".join([str(match) for match in res.matches]))


# 3. Quantize (PTQ)
def calibration_loop(model: torch.fx.GraphModule):
    model.eval()
    # Calibration only observes activations; no gradients are needed.
    with torch.no_grad():
        for _ in range(100):
            model(torch.randn(1, 3, 224, 224))


quantized_model = quantize(
    res.model, (example_input,), forward_loop=calibration_loop
)
quantized_model.eval()

# 4. Export as usual (dynamo-exported models may have compilation issues)
torch.onnx.export(
    quantized_model, (example_input,), "model.onnx", dynamo=False
)

# 5. Quantization-aware training with a training loop
qat_model = quantized_model.train()
# Freeze BatchNorm, or apply other QAT utilities as needed
# train(qat_model)
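
# Illustrative QAT loop sketch on random data, only to show the loop shape.
# The optimizer, loss, and data below are assumptions, not embedl-deploy API;
# replace them with your real training loop and dataset.
optimizer = torch.optim.SGD(qat_model.parameters(), lr=1e-4)
for _ in range(10):
    x = torch.randn(1, 3, 224, 224)
    labels = torch.randint(0, 1000, (1,))  # resnet18 outputs 1000 classes
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(qat_model(x), labels)
    loss.backward()
    optimizer.step()
qat_model.eval()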

# Compile
# -------
# Compilation can be done with TensorRT's trtexec tool, which can take the ONNX
# model and compile it for inference. The exported layer info and profile can
# be used for debugging, optimization and visualization.
#
# Note that the ONNX model might need to be simplified with onnx-simplifier
# for trtexec to compile it. Dynamo-exported models may have compilation
# issues, so it is recommended to export with dynamo=False.
#
# We are working on an ATen-based export path that should be more robust and
# support more models in the future.

# >> onnxsim model.onnx model.onnx
# >> trtexec \
#       --onnx=model.onnx \
#       --exportLayerInfo=layer_info.json \
#       --exportProfile=profile.json \
#       --profilingVerbosity=detailed

# More benchmarking scripts can be found in the examples/ directory
```
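
To sanity-check the exported ONNX file before compiling, you can run it with
`onnxruntime` and compare against the PyTorch output. `onnxruntime` is not a
dependency of this package; this optional sketch continues from the Quick
Start above (if you ran a QAT loop after exporting, re-export first so the
weights match):

```python
import numpy as np
import onnxruntime as ort
import torch

sess = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name

x = torch.randn(1, 3, 224, 224)
(onnx_out,) = sess.run(None, {input_name: x.numpy()})

with torch.no_grad():
    torch_out = quantized_model(x).numpy()

# Small differences are expected from operator-level float rounding.
print("max abs diff:", np.abs(onnx_out - torch_out).max())
```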

## Design Principles

1. **Patterns are the only abstraction.**
   Every graph transformation — fusion, conversion, quantization — is a
   `Pattern` subclass. Adding a new backend (TIDL, QNN, …) means defining a
   new set of `Pattern` subclasses and fused modules with quantization
   information. The core plan/apply machinery stays the same.

2. **Plans are editable.**
   `get_transformation_plan()` returns a plan the user can inspect and edit
   before applying. Toggle `match.apply = False` to skip specific matches.
   `transform()` is a convenience for the common case where you want
   everything applied. See the sketch after this list.

3. **FX-graph-based.**
   All graph analysis and surgery uses `torch.fx`. Models are traced once
   and manipulated as `fx.GraphModule` objects. Support for Aten graphs
   produced by `torch.export.export` is planned for the future.
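
A minimal sketch of the plan/apply workflow described above. The README names
`get_transformation_plan()` and the `match.apply` flag; the exact import path,
the plan's attributes, and the call that applies an edited plan are assumptions
here, not confirmed API:

```python
import torch
from torchvision.models import resnet18

# Import paths below are assumptions: the README names
# get_transformation_plan() but does not show which module exports it.
from embedl_deploy import get_transformation_plan, transform
from embedl_deploy.tensorrt import TENSORRT_PATTERNS

model = resnet18().eval()

# Build a plan without applying it (principle 2). `plan.matches` is assumed
# to mirror `res.matches` from the Quick Start.
plan = get_transformation_plan(model, patterns=TENSORRT_PATTERNS)
for match in plan.matches:
    print(match)
    # Toggle a specific match off before applying (flag from the README):
    # match.apply = False

# transform() is the convenience path that plans and applies everything.
res = transform(model, patterns=TENSORRT_PATTERNS)

# The result is a torch.fx.GraphModule (principle 3), so standard FX
# inspection and surgery apply:
print(res.model.graph)
```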

## Support

- [GitHub Issues](https://github.com/embedl/embedl-deploy/issues)
- Maintainers: The Embedl Team
- Contact: Shahnawaz Ahmed | shahnawaz@embedl.com | @quantshah

## License

Free for non-commercial use under the Embedl Community License (v1.0).

Please [Contact us](https://embedl.com/contact) for commercial licensing.

Copyright (C) 2026 Embedl AB
